Jan 30 13:43:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 13:43:07 crc restorecon[4594]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:07 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc 
restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:43:08 crc restorecon[4594]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 13:43:10 crc kubenswrapper[4793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:43:10 crc kubenswrapper[4793]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 13:43:10 crc kubenswrapper[4793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:43:10 crc kubenswrapper[4793]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
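The long run of "not reset as customized by admin" entries above is restorecon leaving files alone whose SELinux type is admin-customizable (here container_file_t, carrying per-pod MCS category pairs such as c7,c13 or c682,c947) rather than forcing them back to the policy default. The context restorecon compares lives in each file's security.selinux extended attribute. A minimal Python sketch for inspecting those labels, assuming a Linux host with SELinux enabled; the root path is illustrative, taken from the log above:

    import os

    def selinux_label(path):
        # The SELinux context is stored in the "security.selinux" xattr;
        # restorecon compares this value against the policy's expected label.
        try:
            raw = os.getxattr(path, "security.selinux", follow_symlinks=False)
            return raw.decode().rstrip("\x00")
        except OSError:
            return None  # no label, permission denied, or xattrs unsupported

    def walk_labels(root):
        # Print the context of every entry under root.
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                p = os.path.join(dirpath, name)
                print(selinux_label(p), p)

    if __name__ == "__main__":
        walk_labels("/var/lib/kubelet/pods")

Running it as root over /var/lib/kubelet/pods would reproduce the label-per-path view that restorecon is reporting on; a dry run of restorecon itself (restorecon -nv) shows the same comparison without changing anything.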
Jan 30 13:43:10 crc kubenswrapper[4793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:43:10 crc kubenswrapper[4793]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.019102 4793 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024004 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024072 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024082 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024087 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024093 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024099 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024104 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024109 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024113 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024118 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024124 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024129 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024134 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024142 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024147 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024151 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024159 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
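Each "Flag ... has been deprecated" entry above carries the same remedy: move the setting into the file named by the kubelet's --config flag. A stdlib-only sketch of generating such a KubeletConfiguration follows; all values are hypothetical stand-ins, not read from this node, and JSON is used only because the kubelet's YAML config format accepts it as a subset:

    import json

    # Hypothetical values; the real ones come from this node's kubelet invocation.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # Replaces --container-runtime-endpoint
        "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
        # Replaces --volume-plugin-dir
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
        # Replaces --register-with-taints
        "registerWithTaints": [
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
        ],
        # Replaces --system-reserved
        "systemReserved": {"cpu": "500m", "memory": "1Gi"},
        # Gate names are passed through verbatim; ones the embedded Kubernetes
        # registry does not know produce the feature_gate.go warnings seen here.
        "featureGates": {"CloudDualStackNodeIPs": True},
    }

    with open("kubelet-config.json", "w") as f:
        json.dump(kubelet_config, f, indent=2)

Starting the kubelet with --config kubelet-config.json would quiet the flag-deprecation warnings; the "unrecognized feature gate" warnings that continue below are different in kind, since those gates (OpenShift-specific names such as MixedCPUsAllocation or AdminNetworkPolicy) are simply unknown to the kubelet's own gate registry and are logged and ignored rather than rejected.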
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024166 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024171 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024176 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024181 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024186 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024191 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024197 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024202 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024243 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024250 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024255 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024262 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024268 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024275 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024280 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024287 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024293 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024299 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024304 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024309 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024316 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024325 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024330 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024335 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024340 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:43:10 crc 
kubenswrapper[4793]: W0130 13:43:10.024345 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024353 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024360 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024368 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024373 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024378 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024383 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024391 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024395 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024400 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024404 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024409 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024413 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024418 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024423 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024429 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024434 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024438 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024442 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024454 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024459 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024464 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024469 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024474 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024479 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 
13:43:10.024485 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024491 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024496 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.024501 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.025975 4793 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.025997 4793 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026011 4793 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026026 4793 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026034 4793 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026040 4793 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026068 4793 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026075 4793 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026081 4793 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026086 4793 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026092 4793 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026098 4793 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026104 4793 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026109 4793 flags.go:64] FLAG: --cgroup-root="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026114 4793 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026119 4793 flags.go:64] FLAG: --client-ca-file="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026124 4793 flags.go:64] FLAG: --cloud-config="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026129 4793 flags.go:64] FLAG: --cloud-provider="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026134 4793 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026142 4793 flags.go:64] FLAG: --cluster-domain="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026147 4793 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026152 4793 flags.go:64] FLAG: --config-dir="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026157 4793 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026162 4793 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026170 4793 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026175 4793 
flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026181 4793 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026187 4793 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026192 4793 flags.go:64] FLAG: --contention-profiling="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026198 4793 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026204 4793 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026211 4793 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026219 4793 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026229 4793 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026236 4793 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026241 4793 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026246 4793 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026252 4793 flags.go:64] FLAG: --enable-server="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026257 4793 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026265 4793 flags.go:64] FLAG: --event-burst="100" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026270 4793 flags.go:64] FLAG: --event-qps="50" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026275 4793 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026281 4793 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026285 4793 flags.go:64] FLAG: --eviction-hard="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026293 4793 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026297 4793 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026302 4793 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026308 4793 flags.go:64] FLAG: --eviction-soft="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026312 4793 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026317 4793 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026322 4793 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026328 4793 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026334 4793 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026338 4793 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026343 4793 flags.go:64] FLAG: --feature-gates="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 
13:43:10.026350 4793 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026355 4793 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026361 4793 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026366 4793 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026371 4793 flags.go:64] FLAG: --healthz-port="10248" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026376 4793 flags.go:64] FLAG: --help="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026382 4793 flags.go:64] FLAG: --hostname-override="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026387 4793 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026392 4793 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026398 4793 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026404 4793 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026409 4793 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026415 4793 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026420 4793 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026425 4793 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026430 4793 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026435 4793 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026440 4793 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026445 4793 flags.go:64] FLAG: --kube-reserved="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026450 4793 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026455 4793 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026502 4793 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026510 4793 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026515 4793 flags.go:64] FLAG: --lock-file="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026521 4793 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026526 4793 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026533 4793 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026542 4793 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026548 4793 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026553 4793 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026558 4793 
flags.go:64] FLAG: --logging-format="text" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026564 4793 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026570 4793 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026575 4793 flags.go:64] FLAG: --manifest-url="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026580 4793 flags.go:64] FLAG: --manifest-url-header="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026589 4793 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026595 4793 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026602 4793 flags.go:64] FLAG: --max-pods="110" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026608 4793 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026614 4793 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026620 4793 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026626 4793 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026633 4793 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026639 4793 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026644 4793 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026664 4793 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026669 4793 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026674 4793 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026679 4793 flags.go:64] FLAG: --pod-cidr="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026684 4793 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026692 4793 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026698 4793 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026703 4793 flags.go:64] FLAG: --pods-per-core="0" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026708 4793 flags.go:64] FLAG: --port="10250" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026714 4793 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026719 4793 flags.go:64] FLAG: --provider-id="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026724 4793 flags.go:64] FLAG: --qos-reserved="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026729 4793 flags.go:64] FLAG: --read-only-port="10255" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026734 4793 flags.go:64] FLAG: --register-node="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026739 4793 flags.go:64] 
FLAG: --register-schedulable="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026745 4793 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026756 4793 flags.go:64] FLAG: --registry-burst="10" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026761 4793 flags.go:64] FLAG: --registry-qps="5" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026766 4793 flags.go:64] FLAG: --reserved-cpus="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026773 4793 flags.go:64] FLAG: --reserved-memory="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026780 4793 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026785 4793 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026789 4793 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026794 4793 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026802 4793 flags.go:64] FLAG: --runonce="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026807 4793 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026812 4793 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026818 4793 flags.go:64] FLAG: --seccomp-default="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026824 4793 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026830 4793 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026836 4793 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026841 4793 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026847 4793 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026852 4793 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026857 4793 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026862 4793 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026867 4793 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026873 4793 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026879 4793 flags.go:64] FLAG: --system-cgroups="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026884 4793 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026894 4793 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026899 4793 flags.go:64] FLAG: --tls-cert-file="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026904 4793 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026911 4793 flags.go:64] FLAG: --tls-min-version="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026916 4793 
flags.go:64] FLAG: --tls-private-key-file="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026921 4793 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026926 4793 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026932 4793 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026938 4793 flags.go:64] FLAG: --v="2" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026953 4793 flags.go:64] FLAG: --version="false" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026962 4793 flags.go:64] FLAG: --vmodule="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026969 4793 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.026974 4793 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027140 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027150 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027157 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027161 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027166 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027170 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027175 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027179 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027183 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027187 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027192 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027196 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027200 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027205 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027209 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027213 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027218 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027222 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 
13:43:10.027226 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027231 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027235 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027239 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027246 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027251 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027255 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027259 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027264 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027269 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027273 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027278 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027282 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027287 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027291 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027296 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027300 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027304 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027309 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027313 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027319 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027323 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027328 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027332 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027336 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.027342 4793 feature_gate.go:351] Setting deprecated feature 
gate KMSv1=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044139 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044177 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044183 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044214 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044223 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044231 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044237 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044243 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044247 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044252 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044257 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044263 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
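The long runs of "unrecognized feature gate" warnings are expected on this platform: the cluster-wide gate list includes many OpenShift-only gates (GatewayAPI, PinnedImages, NewOLM, and so on) that the embedded Kubernetes kubelet does not register, so it warns and skips them. Gates it does know but which are already GA (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders) draw the feature_gate.go:353 notice when set explicitly, and deprecated ones (KMSv1) the feature_gate.go:351 notice; the list is re-applied several times during startup, which is why the same warnings repeat. A minimal sketch of that classification logic, my own illustration rather than the actual feature_gate.go code, with an example gate table that is not the kubelet's real one:

```go
package main

import "fmt"

type stage int

const (
	alpha stage = iota
	beta
	ga
	deprecated
)

// known mimics the kubelet's registered gate table; entries are examples only.
var known = map[string]stage{
	"CloudDualStackNodeIPs":                  ga,
	"ValidatingAdmissionPolicy":              ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"KMSv1":                                  deprecated,
	"NodeSwap":                               beta,
}

// set applies a requested gate map, warning the way the log lines above do.
func set(gates map[string]bool) {
	for name, enabled := range gates {
		st, ok := known[name]
		switch {
		case !ok:
			fmt.Printf("W unrecognized feature gate: %s\n", name)
		case st == ga:
			fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
		case st == deprecated:
			fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
		}
	}
}

func main() {
	set(map[string]bool{"GatewayAPI": true, "KMSv1": true, "CloudDualStackNodeIPs": true})
}
```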
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044269 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044274 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044278 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044283 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044288 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044293 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044297 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044302 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044307 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044311 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044316 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044320 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044325 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044330 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.044334 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.044350 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.053327 4793 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.053369 4793 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053469 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053478 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053483 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053488 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053493 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:43:10 crc 
kubenswrapper[4793]: W0130 13:43:10.053498 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053502 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053513 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053518 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053522 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053526 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053530 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053534 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053538 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053542 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053546 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053552 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053559 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053564 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053569 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053573 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053577 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053581 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053585 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053591 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
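A few entries back the kubelet logged its version (v1.31.5) and its Go runtime settings with GOGC, GOMAXPROCS, and GOTRACEBACK all empty, meaning the environment variables are unset and the Go defaults apply (GOGC=100, GOMAXPROCS equal to the CPU count, GOTRACEBACK=single). A small sketch of how those raw-versus-effective values can be checked in any Go process:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Empty strings here, as in the kubelet's "Golang settings" line,
	// mean the variable is unset and the Go default applies.
	for _, v := range []string{"GOGC", "GOMAXPROCS", "GOTRACEBACK"} {
		fmt.Printf("%s=%q\n", v, os.Getenv(v))
	}
	fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```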
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053597 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053602 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053608 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053623 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053628 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053632 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053636 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053640 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053645 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053650 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053654 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053659 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053663 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053668 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053672 4793 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053676 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053681 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053686 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053690 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053695 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053699 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053703 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053707 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053711 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053716 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053721 4793 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053726 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053730 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053736 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053740 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053744 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053748 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053752 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053756 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053760 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053764 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053767 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053771 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053775 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053780 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053785 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053788 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053792 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053796 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053799 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053805 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.053811 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053984 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053991 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.053997 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054002 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054008 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054015 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
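Each parsing pass ends with an I-level feature_gate.go:386 entry dumping the effective gate map, and the dumps in this boot are identical, which is the quick way to confirm that the repeated runs all converged on the same configuration. When comparing many boots it can help to turn that Go-syntax map payload back into a structure; the parseGates helper below is a hypothetical log-scraping aid of mine, not part of any tool:

```go
package main

import (
	"fmt"
	"strings"
)

// parseGates converts the kubelet's "feature gates: {map[K:v K:v ...]}"
// payload into a map[string]bool for easy diffing across boots.
func parseGates(s string) map[string]bool {
	out := map[string]bool{}
	s = strings.TrimSuffix(strings.TrimPrefix(strings.TrimSpace(s), "{map["), "]}")
	for _, kv := range strings.Fields(s) {
		if k, v, ok := strings.Cut(kv, ":"); ok {
			out[k] = v == "true"
		}
	}
	return out
}

func main() {
	// Abbreviated example payload in the same shape as the log line above.
	line := "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}"
	fmt.Println(parseGates(line))
}
```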
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054020 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054025 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054029 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054034 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054038 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054042 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054061 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054066 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054071 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054075 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054080 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054084 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054088 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054092 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054096 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054100 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054105 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054109 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054113 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054117 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054122 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054126 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054130 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054134 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054138 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054143 4793 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054147 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054151 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054156 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054161 4793 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054166 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054170 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054174 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054178 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054182 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054187 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054191 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054195 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054199 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054204 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054211 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054217 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054223 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054228 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054232 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054237 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054242 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054246 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054251 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054255 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054259 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054264 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054268 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054272 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054276 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054280 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054284 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054288 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054292 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054297 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054301 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054306 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054310 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054314 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.054319 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.054325 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.054516 4793 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.060948 4793 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.061115 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.074329 4793 server.go:997] "Starting client certificate rotation"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.074375 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.074682 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-16 19:10:14.423096962 +0000 UTC
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.074828 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.156184 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.158108 4793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.158300 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.181770 4793 log.go:25] "Validated CRI v1 runtime API"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.225552 4793 log.go:25] "Validated CRI v1 image API"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.228451 4793 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.235176 4793 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-13-36-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.235208 4793 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.259607 4793 manager.go:217] Machine: {Timestamp:2026-01-30 13:43:10.250463661 +0000 UTC m=+0.951812202 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3688a16a-f9da-4911-94b1-610f1963c9db BootID:605f6c1b-97a6-4742-afaf-97317a89f932 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:cf:70:bd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:cf:70:bd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fe:5c:a6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2b:7c:ae Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b4:39:b4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6b:84:98 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d4:cc:ac:85:ff Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:1d:0e:3a:3f:93 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.259808 4793 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.259974 4793 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.260273 4793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.260412 4793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.260448 4793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.260647 4793 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.260660 4793 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.261164 4793 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.261192 4793 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.261718 4793 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.261816 4793 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.268728 4793 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.268750 4793 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.268764 4793 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.268775 4793 kubelet.go:324] "Adding apiserver pod source"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.268787 4793 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.273592 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.273661 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.273710 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.273804 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.275973 4793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.276850 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.282492 4793 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284437 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284482 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284489 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284498 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284512 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284521 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284530 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284543 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284555 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284563 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284574 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.284582 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.285547 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.286097 4793 server.go:1280] "Started kubelet"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.286160 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:10 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.288197 4793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.288346 4793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.289033 4793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.293865 4793 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.297470 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.297537 4793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.297970 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:27:47.880596374 +0000 UTC
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.299659 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.300291 4793 factory.go:55] Registering systemd factory
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.300311 4793 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.301166 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="200ms"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.301298 4793 factory.go:153] Registering CRI-O factory
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.301340 4793 factory.go:221] Registration of the crio container factory successfully
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.301401 4793 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.301422 4793 factory.go:103] Registering Raw factory
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.301450 4793 manager.go:1196] Started watching for new ooms in manager
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.302426 4793 manager.go:319] Starting recovery of all containers
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.306291 4793 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.306323 4793 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.307126 4793 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.309351 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.309434 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.309572 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8611f661de4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:43:10.286069323 +0000 UTC m=+0.987417814,LastTimestamp:2026-01-30 13:43:10.286069323 +0000 UTC m=+0.987417814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323534 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323619 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323649 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323662 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323703 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323722 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323739 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323753 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323766 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323805 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323818 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323914 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323948 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323967 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.323982 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324011 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324031 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324067 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324083 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324095 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324112 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324126 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324141 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324170 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324205 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324252 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324274 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324289 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324303 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324315 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324329 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324369 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324383 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324395 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324406 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324417 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324430 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324443 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324454 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324466 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324479 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324491 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324507 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324524 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324539 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324551 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324563 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324575 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324588 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324602 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324617 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324635 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324651 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324666 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324688 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324701 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324714 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324726 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324736 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324746 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324766 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324777 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324786 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324796 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324806 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324815 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324825 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324835 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324843 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324852 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324861 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324872 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324882 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324893 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324903 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324916 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324928 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324937 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324946 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324955 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324965 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324975 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324985 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.324995 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325006 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325015 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325024 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325033 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325065 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325087 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325098 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325112 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325121 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325130 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325141 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325155 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325164 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325174 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325188 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325198 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325208 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325217 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325227 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325252 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325263 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325273 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325283 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325296 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325306 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325329 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325339 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325369 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325381 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325391 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325400 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325426 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325446 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325453 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325461 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325476 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325484 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325520 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325530 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325545 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325554 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325570 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325577 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325593 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325602 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325611 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325620 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325628 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325643 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325651 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325687 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325697 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325719 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325727 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325742 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325750 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325760 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325769 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325784 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325792 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325802 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325810 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325818 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325843 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325846 4793 manager.go:324] Recovery completed Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.325857 4793 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326059 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326092 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326107 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326120 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326152 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326164 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326178 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326190 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326202 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326264 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326279 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326290 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326313 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326323 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326334 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326386 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326401 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326443 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326458 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326470 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326484 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326495 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326522 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326535 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326552 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326564 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326580 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326592 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326604 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326619 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326631 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326644 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326656 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326669 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.326682 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.329930 4793 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.329966 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.329980 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.329994 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330006 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330019 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330031 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330070 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330087 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330100 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330113 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330126 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330141 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330170 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330183 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330196 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330210 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330224 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330238 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330250 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330262 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330274 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330285 4793 reconstruct.go:97] "Volume reconstruction finished" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.330293 4793 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.334673 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.335930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.335955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.335966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.336601 4793 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.336625 4793 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.336643 4793 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.394885 4793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.396937 4793 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.396988 4793 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.397016 4793 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.397542 4793 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.397892 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.398004 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.399933 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.402565 4793 policy_none.go:49] "None policy: Start" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.403593 4793 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.403621 4793 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.461161 4793 manager.go:334] "Starting Device Plugin manager" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.461230 4793 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.461243 4793 server.go:79] "Starting device plugin registration server" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.461686 4793 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.461697 4793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.462004 4793 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.462168 4793 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.462180 4793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.472252 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.498185 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 13:43:10 crc kubenswrapper[4793]: 
I0130 13:43:10.498337 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.500347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.500376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.500387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.500507 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.500883 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501003 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501254 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501760 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.501730 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="400ms" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.501804 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502411 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502703 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.502744 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503433 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503541 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.503591 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505413 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.505445 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.506463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.506485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.506493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.532951 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.532992 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533078 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533193 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533240 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 
13:43:10.533260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533278 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533341 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533359 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533403 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533421 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.533466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.562055 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.563018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.563084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: 
I0130 13:43:10.563095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.563123 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.563729 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635126 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635210 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635231 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635258 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635284 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635306 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635326 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635347 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635366 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635385 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635780 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635539 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635569 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635538 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635939 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635909 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.636036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.635643 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.636116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.764233 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.768344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.768394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.768406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.768435 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.768900 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.826885 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.832364 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.852484 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.867985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: I0130 13:43:10.876025 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.899647 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a9e02ab436a512e4d41e90cc388a1b0b9ba8caf4374ea010ea5bbaf19fa62a02 WatchSource:0}: Error finding container a9e02ab436a512e4d41e90cc388a1b0b9ba8caf4374ea010ea5bbaf19fa62a02: Status 404 returned error can't find the container with id a9e02ab436a512e4d41e90cc388a1b0b9ba8caf4374ea010ea5bbaf19fa62a02 Jan 30 13:43:10 crc kubenswrapper[4793]: E0130 13:43:10.903332 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="800ms" Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.912604 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-517293cb198331f6e36a95deadacfc55f84f8fd4571c76018ac4fba8be4806a8 WatchSource:0}: Error finding container 517293cb198331f6e36a95deadacfc55f84f8fd4571c76018ac4fba8be4806a8: Status 404 returned error can't find the container with id 517293cb198331f6e36a95deadacfc55f84f8fd4571c76018ac4fba8be4806a8 Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.914188 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e645336750a41cf5439c274c81df29d0d2f3cbb0fc79421826e24b58741aa2c5 WatchSource:0}: Error finding container e645336750a41cf5439c274c81df29d0d2f3cbb0fc79421826e24b58741aa2c5: Status 404 returned error can't find the container with id e645336750a41cf5439c274c81df29d0d2f3cbb0fc79421826e24b58741aa2c5 Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.922715 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4d502114645586680bc9c632ec13f887baf0db53a6e28b5c98dd9bf9c07cd84e WatchSource:0}: Error finding container 4d502114645586680bc9c632ec13f887baf0db53a6e28b5c98dd9bf9c07cd84e: Status 404 returned error can't find the container with id 4d502114645586680bc9c632ec13f887baf0db53a6e28b5c98dd9bf9c07cd84e Jan 30 13:43:10 crc kubenswrapper[4793]: W0130 13:43:10.945575 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ca2cfe82475b9465ab78c66a3f03bb3b2ed2e46045232918b539faea7a13bfaf WatchSource:0}: Error finding container ca2cfe82475b9465ab78c66a3f03bb3b2ed2e46045232918b539faea7a13bfaf: Status 404 returned error can't find the container with id ca2cfe82475b9465ab78c66a3f03bb3b2ed2e46045232918b539faea7a13bfaf Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.153027 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8611f661de4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
Jan 30 13:43:11 crc kubenswrapper[4793]: W0130 13:43:11.166200 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.166300 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.169710 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.170841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.170958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.170977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.171016 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.171550 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc"
Jan 30 13:43:11 crc kubenswrapper[4793]: W0130 13:43:11.174057 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.174123 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.287124 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.298059 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:50:15.925840189 +0000 UTC
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.403385 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e645336750a41cf5439c274c81df29d0d2f3cbb0fc79421826e24b58741aa2c5"}
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.404607 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"517293cb198331f6e36a95deadacfc55f84f8fd4571c76018ac4fba8be4806a8"}
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.405847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9e02ab436a512e4d41e90cc388a1b0b9ba8caf4374ea010ea5bbaf19fa62a02"}
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.406941 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca2cfe82475b9465ab78c66a3f03bb3b2ed2e46045232918b539faea7a13bfaf"}
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.407820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d502114645586680bc9c632ec13f887baf0db53a6e28b5c98dd9bf9c07cd84e"}
Jan 30 13:43:11 crc kubenswrapper[4793]: W0130 13:43:11.535643 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.535709 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:11 crc kubenswrapper[4793]: W0130 13:43:11.679775 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.679855 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.704914 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="1.6s"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.972287 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.973452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.973495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.973511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:11 crc kubenswrapper[4793]: I0130 13:43:11.973537 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 13:43:11 crc kubenswrapper[4793]: E0130 13:43:11.974253 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.287730 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.298840 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:50:48.893864464 +0000 UTC
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.330403 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 13:43:12 crc kubenswrapper[4793]: E0130 13:43:12.331299 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.413350 4793 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb" exitCode=0
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.413414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb"}
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.418108 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.418878 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8"}
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.418960 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.420574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.420603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.420613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.420762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.420810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.420825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.422841 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410"}
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.426139 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec"}
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.426220 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.427069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.427100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.427114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.427669 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca" exitCode=0
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.427701 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca"}
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.427820 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.429610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.429646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:12 crc kubenswrapper[4793]: I0130 13:43:12.429656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: W0130 13:43:13.079869 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:13 crc kubenswrapper[4793]: E0130 13:43:13.080154 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.287194 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.299188 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:19:07.659785066 +0000 UTC
Jan 30 13:43:13 crc kubenswrapper[4793]: E0130 13:43:13.306258 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="3.2s"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.433287 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506" exitCode=0
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.433362 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506"}
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.433457 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.434424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.434465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.434482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.437416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d"}
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.437440 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.438622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.438679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.438699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.439735 4793 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8" exitCode=0
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.439950 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.440232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8"}
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.441041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.441127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.441144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.445502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1"}
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.445550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca"}
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.448224 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec" exitCode=0
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.448274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec"}
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.449188 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.451314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.451375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.451397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.454723 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.456791 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.456861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.456897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.575242 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.577386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.577425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.577438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:13 crc kubenswrapper[4793]: I0130 13:43:13.577470 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 13:43:13 crc kubenswrapper[4793]: E0130 13:43:13.577986 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc"
Jan 30 13:43:13 crc kubenswrapper[4793]: W0130 13:43:13.582533 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:13 crc kubenswrapper[4793]: E0130 13:43:13.582612 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:14 crc kubenswrapper[4793]: W0130 13:43:14.229602 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:14 crc kubenswrapper[4793]: E0130 13:43:14.229673 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:14 crc kubenswrapper[4793]: W0130 13:43:14.251568 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:14 crc kubenswrapper[4793]: E0130 13:43:14.251628 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.286569 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.299606 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:41:01.478407934 +0000 UTC
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.456405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294"}
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.456526 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.457503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.457534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.457546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.459221 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6"}
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.462378 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e" exitCode=0
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.462487 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.462469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e"}
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.463812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.463880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.463909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.464981 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1"}
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1"} Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.465025 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.465657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.465688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:14 crc kubenswrapper[4793]: I0130 13:43:14.465698 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.287520 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.300737 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:35:57.330968991 +0000 UTC Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.469234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044"} Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.469472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef"} Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.469301 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.470693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.470725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.470735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.472906 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690"} Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.473065 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995"} Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.476120 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 
13:43:15.476106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f"} Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.476284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645"} Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.476936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.476969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.476981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:15 crc kubenswrapper[4793]: I0130 13:43:15.811880 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.068211 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.286848 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.300981 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:01:03.081858107 +0000 UTC Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.483782 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13"} Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.483858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4"} Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.483886 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb"} Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.483895 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.484941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.484980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.484992 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.487388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de"} Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.487441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01"} Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.487450 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.487528 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.487533 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488109 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.488726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:16 crc kubenswrapper[4793]: E0130 13:43:16.507299 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="6.4s" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.547523 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 13:43:16 crc kubenswrapper[4793]: E0130 13:43:16.548652 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.702232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.778792 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.780249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.780295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.780313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.780349 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:43:16 crc kubenswrapper[4793]: E0130 13:43:16.780827 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Jan 30 13:43:16 crc kubenswrapper[4793]: I0130 13:43:16.889199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.287029 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.302096 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:09:29.972238342 +0000 UTC Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.490487 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.492026 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de" exitCode=255 Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.492098 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de"} Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.492141 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.492156 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.492204 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.493172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:17 crc kubenswrapper[4793]: I0130 13:43:17.494141 4793 scope.go:117] "RemoveContainer" containerID="ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de" Jan 30 13:43:17 crc kubenswrapper[4793]: W0130 13:43:17.862382 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:43:17 crc kubenswrapper[4793]: E0130 13:43:17.862476 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:43:17 crc kubenswrapper[4793]: W0130 13:43:17.997919 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:43:17 crc kubenswrapper[4793]: E0130 13:43:17.998040 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.025466 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.303347 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:58:05.445364858 +0000 UTC Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.496895 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.498232 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.498700 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.499004 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506"} Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.499142 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.507856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.812191 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:43:18 crc kubenswrapper[4793]: I0130 13:43:18.812279 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.238895 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.304101 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:11:30.911456829 +0000 UTC
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.501591 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.501666 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.501851 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.503472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.503544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.503565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.504338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.504543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.504613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.529577 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.634956 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.635424 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.636815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.636875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.636892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:19 crc kubenswrapper[4793]: I0130 13:43:19.776361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.305002 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:30:21.678381018 +0000 UTC
Jan 30 13:43:20 crc kubenswrapper[4793]: E0130 13:43:20.472745 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.503422 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.503829 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.504089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.504133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.504142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.505300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.505316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.505324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.605622 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.605956 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.606895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.606983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:20 crc kubenswrapper[4793]: I0130 13:43:20.607080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:21 crc kubenswrapper[4793]: I0130 13:43:21.305875 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:02:02.646447002 +0000 UTC
Jan 30 13:43:21 crc kubenswrapper[4793]: I0130 13:43:21.506325 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:43:21 crc kubenswrapper[4793]: I0130 13:43:21.507620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:21 crc kubenswrapper[4793]: I0130 13:43:21.507659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:21 crc kubenswrapper[4793]: I0130 13:43:21.507671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:22 crc kubenswrapper[4793]: I0130 13:43:22.306661 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:39:02.06703525 +0000 UTC
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:39:02.06703525 +0000 UTC Jan 30 13:43:23 crc kubenswrapper[4793]: I0130 13:43:23.181364 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:23 crc kubenswrapper[4793]: I0130 13:43:23.182590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:23 crc kubenswrapper[4793]: I0130 13:43:23.182615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:23 crc kubenswrapper[4793]: I0130 13:43:23.182624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:23 crc kubenswrapper[4793]: I0130 13:43:23.182643 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:43:23 crc kubenswrapper[4793]: I0130 13:43:23.307303 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:46:58.655962086 +0000 UTC Jan 30 13:43:24 crc kubenswrapper[4793]: I0130 13:43:24.308574 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:15:27.728708266 +0000 UTC Jan 30 13:43:24 crc kubenswrapper[4793]: I0130 13:43:24.920383 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 13:43:25 crc kubenswrapper[4793]: I0130 13:43:25.309034 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:28:02.86078319 +0000 UTC Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.309521 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:51:19.105591038 +0000 UTC Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.752276 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.752423 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.754554 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.754605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.754620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.772443 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.868216 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.868575 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.876380 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.876659 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.902365 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.902513 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.903481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.903520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:26 crc kubenswrapper[4793]: I0130 13:43:26.903530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:27 crc kubenswrapper[4793]: I0130 13:43:27.309595 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:22:22.706102245 +0000 UTC Jan 30 13:43:27 crc kubenswrapper[4793]: I0130 13:43:27.521137 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:27 crc kubenswrapper[4793]: I0130 13:43:27.522564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:27 crc kubenswrapper[4793]: I0130 13:43:27.522619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:27 crc kubenswrapper[4793]: I0130 13:43:27.522639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:28 crc kubenswrapper[4793]: I0130 13:43:28.026615 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 13:43:28 crc kubenswrapper[4793]: I0130 13:43:28.027273 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 13:43:28 crc kubenswrapper[4793]: I0130 13:43:28.310628 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:30:37.443410268 +0000 UTC Jan 30 13:43:28 crc kubenswrapper[4793]: I0130 13:43:28.812590 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:43:28 crc kubenswrapper[4793]: I0130 13:43:28.812948 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.311092 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:24:48.671928813 +0000 UTC Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.565017 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.779914 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.780079 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.780885 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.780957 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.781326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.781373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.781383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:29 crc kubenswrapper[4793]: I0130 13:43:29.784450 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:30 crc 
kubenswrapper[4793]: I0130 13:43:30.311743 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:38:48.033459473 +0000 UTC Jan 30 13:43:30 crc kubenswrapper[4793]: E0130 13:43:30.472852 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 13:43:30 crc kubenswrapper[4793]: I0130 13:43:30.526787 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:30 crc kubenswrapper[4793]: I0130 13:43:30.527164 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 13:43:30 crc kubenswrapper[4793]: I0130 13:43:30.527233 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 13:43:30 crc kubenswrapper[4793]: I0130 13:43:30.527889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:30 crc kubenswrapper[4793]: I0130 13:43:30.527926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:30 crc kubenswrapper[4793]: I0130 13:43:30.527961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.312668 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:31:46.279668972 +0000 UTC Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.857625 4793 trace.go:236] Trace[1018025099]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 13:43:19.331) (total time: 12525ms): Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[1018025099]: ---"Objects listed" error: 12525ms (13:43:31.857) Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[1018025099]: [12.52582542s] [12.52582542s] END Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.857663 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.859468 4793 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.860602 4793 trace.go:236] Trace[885684309]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 13:43:18.545) (total time: 13315ms): Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[885684309]: ---"Objects listed" error: 13315ms (13:43:31.860) Jan 30 13:43:31 crc kubenswrapper[4793]: Trace[885684309]: [13.315383353s] [13.315383353s] END Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.860631 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:31 crc kubenswrapper[4793]: E0130 13:43:31.861224 4793 kubelet_node_status.go:99] "Unable to register node with 
API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.884375 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.913452 4793 csr.go:261] certificate signing request csr-l8s42 is approved, waiting to be issued Jan 30 13:43:31 crc kubenswrapper[4793]: I0130 13:43:31.924637 4793 csr.go:257] certificate signing request csr-l8s42 is issued Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.312962 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:03:23.19085244 +0000 UTC Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.475737 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.532249 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.532843 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.534304 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" exitCode=255 Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.534346 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506"} Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.534381 4793 scope.go:117] "RemoveContainer" containerID="ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.572632 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:32 crc kubenswrapper[4793]: E0130 13:43:32.572866 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.826857 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.925381 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 13:38:31 +0000 UTC, rotation deadline is 2026-11-15 20:09:28.321009213 +0000 UTC Jan 30 13:43:32 crc kubenswrapper[4793]: I0130 13:43:32.925425 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6942h25m55.395587828s for next certificate rotation Jan 30 13:43:33 
crc kubenswrapper[4793]: I0130 13:43:33.279561 4793 apiserver.go:52] "Watching apiserver" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.282082 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.282529 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-mbqcp","openshift-multus/multus-2ssnl","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-node-g62p5","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-rdsch","openshift-multus/multus-additional-cni-plugins-nsxfs"] Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.283511 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.283799 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.283925 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284178 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.284245 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284281 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284419 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.284456 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284539 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.284592 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284652 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.284957 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.285195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.285460 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.286484 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291471 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291564 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291687 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.291710 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.293581 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.293904 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.294005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.294571 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.297087 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.297539 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.297552 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:43:33 crc kubenswrapper[4793]: 
I0130 13:43:33.299854 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299896 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299910 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.299849 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300010 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300017 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300037 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300134 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300161 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300219 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300256 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300408 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300713 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300831 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.300989 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.301722 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.308997 4793 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.313250 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:59:54.45176892 +0000 UTC Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.320240 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.338192 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.349923 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.359359 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368347 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368388 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368465 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368493 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368506 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368534 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368564 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368582 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368596 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.368609 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368625 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368638 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368709 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368750 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368764 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368777 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368825 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368839 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368867 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.368915 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368929 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368957 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368973 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.368986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369000 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369019 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369033 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369085 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369112 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369170 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369186 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369202 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369216 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369243 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369256 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369285 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369299 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369313 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369335 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369364 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369426 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369441 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369462 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369478 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369524 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369538 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369611 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369656 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369672 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369688 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369750 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369764 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369780 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369827 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369842 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369859 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369873 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369917 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369936 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.370003 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370024 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370062 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370082 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370129 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370143 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370173 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.370186 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370252 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370267 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370283 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370315 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370346 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370363 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370379 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370426 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370443 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370508 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370590 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370605 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370621 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370637 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370669 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370685 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370701 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370718 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370751 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370782 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370777 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370799 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370830 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370861 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370879 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370895 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370911 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370927 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370942 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370959 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.370991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371009 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371027 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371063 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371079 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371089 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371096 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371301 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371329 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371441 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371463 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371487 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371534 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371554 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371662 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371682 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371745 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371763 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371825 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371855 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371879 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpthl\" (UniqueName: \"kubernetes.io/projected/4a60502c-d692-40e5-bbb7-d07aaaf80f10-kube-api-access-xpthl\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371904 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod 
\"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371915 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372445 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372809 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.372851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373036 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373074 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373799 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.373940 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374180 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374335 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374576 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.374942 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375040 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375264 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375547 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375707 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375861 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.375910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376095 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376270 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.376391 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.377139 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.377552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.377936 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378702 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.378900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.379107 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.379337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.379535 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.380360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.380576 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.380825 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381009 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381228 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381609 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.381975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.382815 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.383023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.383756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.384376 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.384681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.385746 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386112 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.386866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387280 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387297 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387408 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387409 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387740 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387876 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.387976 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.388872 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389109 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389157 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.389972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.390715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391311 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.391801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.394233 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.394505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.394910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.395512 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.371924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.395724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396392 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-multus-certs\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396582 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjsp7\" (UniqueName: \"kubernetes.io/projected/f9dad744-dcef-4c9e-88b1-3d8d935794a4-kube-api-access-mjsp7\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-system-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396654 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396680 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396839 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-socket-dir-parent\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: 
I0130 13:43:33.397695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-bin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397719 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-conf-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397739 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgc5\" (UniqueName: \"kubernetes.io/projected/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-kube-api-access-kxgc5\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397777 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-os-release\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.396925 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397082 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397082 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397162 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397232 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397255 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397302 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397791 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.397986 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398092 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398236 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cni-binary-copy\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-netns\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398319 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-kubelet\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a60502c-d692-40e5-bbb7-d07aaaf80f10-hosts-file\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398501 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398522 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398534 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cnibin\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cnibin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.398683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-multus\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.399304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.399901 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.400558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.369894 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.400961 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401317 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401716 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.401922 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.402287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.402843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.402991 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.403132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.405224 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.405359 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.406461 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407362 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407415 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.407936 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.412660 4793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.432384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438529 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438612 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438805 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.438863 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439202 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439270 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.439606 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.440204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.440621 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.442401 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444781 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444873 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.444967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445081 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-rootfs\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445288 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-system-cni-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.445471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-k8s-cni-cncf-io\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451539 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451572 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-hostroot\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-os-release\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451662 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6pg\" (UniqueName: \"kubernetes.io/projected/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-kube-api-access-2f6pg\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451797 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-daemon-config\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451855 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451885 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-etc-kubernetes\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-proxy-tls\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452118 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452138 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452152 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452169 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452189 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452203 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452217 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452235 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452249 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" 
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452262 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452276 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452293 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452307 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452321 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452335 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452352 4793 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452368 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452382 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452400 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452414 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452428 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452451 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.452468 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452482 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452495 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452509 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452526 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452540 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452554 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452572 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452590 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452603 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452618 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452634 4793 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452648 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452663 4793 reconciler_common.go:293] "Volume detached 
for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452685 4793 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452702 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452715 4793 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452730 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452745 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452761 4793 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452774 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452792 4793 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452806 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452825 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452840 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452854 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452873 4793 reconciler_common.go:293] 
"Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452887 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452900 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452913 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452928 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452942 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452955 4793 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452969 4793 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.452986 4793 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453001 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453015 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453032 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464811 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446262 4793 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.446477 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464913 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464927 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.447037 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.447138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.448680 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.450873 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.451373 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.453406 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.453469 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.953433524 +0000 UTC m=+24.654782015 (durationBeforeRetry 500ms). 
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.449393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.465385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.465817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466002 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466022 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466224 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.466523 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.466550 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.466564 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.466958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.467156 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.467261 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.467782 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.468495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.469891 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.470892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454016 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454706 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.454831 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455157 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455181 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455717 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.455978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.456033 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.456197 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.456629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.457338 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464265 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464616 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.464727 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.471136 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.971112634 +0000 UTC m=+24.672461135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.471314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.971302389 +0000 UTC m=+24.672650890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.471507 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473140 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.473256 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:33.973236886 +0000 UTC m=+24.674585377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473937 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473965 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473975 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473984 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.473993 4793 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474004 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474014 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474024 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474036 4793 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474060 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474070 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474079 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474089 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474098 4793 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474107 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474118 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474132 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474143 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474176 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474188 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474201 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474212 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" 
DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474224 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474235 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474243 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474251 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474260 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474271 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474280 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474289 4793 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474298 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474309 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474317 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474326 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474334 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc 
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474352 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474361 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474371 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474379 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474388 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474396 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474407 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474416 4793 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474424 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474432 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474443 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474451 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474459 4793 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474469 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474477 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474487 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474495 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474506 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474515 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.474524 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.478581 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.479296 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.479560 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.481648 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.485557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.486676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492523 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492602 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.492860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.493256 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500296 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500331 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500345 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.500400 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.000383127 +0000 UTC m=+24.701731618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.515355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.520023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.528562 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.548405 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:16Z\\\",\\\"message\\\":\\\"W0130 13:43:16.323216 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 13:43:16.323625 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769780596 cert, and key in /tmp/serving-cert-3571744094/serving-signer.crt, /tmp/serving-cert-3571744094/serving-signer.key\\\\nI0130 13:43:16.518841 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:43:16.523129 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:43:16.523353 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:16.524369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3571744094/tls.crt::/tmp/serving-cert-3571744094/tls.key\\\\\\\"\\\\nF0130 13:43:16.810880 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 
1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.551530 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576401 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-hostroot\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-os-release\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6pg\" (UniqueName: \"kubernetes.io/projected/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-kube-api-access-2f6pg\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.576992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-daemon-config\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577113 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577304 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-etc-kubernetes\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.577443 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577461 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-proxy-tls\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpthl\" (UniqueName: \"kubernetes.io/projected/4a60502c-d692-40e5-bbb7-d07aaaf80f10-kube-api-access-xpthl\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577762 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-hostroot\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod 
\"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-multus-certs\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578335 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjsp7\" (UniqueName: \"kubernetes.io/projected/f9dad744-dcef-4c9e-88b1-3d8d935794a4-kube-api-access-mjsp7\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-etc-kubernetes\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-system-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578702 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578861 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-socket-dir-parent\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579093 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-conf-dir\") pod \"multus-2ssnl\" (UID: 
\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577691 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.579442 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-os-release\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.577814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.579187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580379 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-bin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgc5\" (UniqueName: \"kubernetes.io/projected/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-kube-api-access-kxgc5\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-multus-certs\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-daemon-config\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580514 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578133 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-system-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580686 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-bin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580704 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-socket-dir-parent\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " 
pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-conf-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.578811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581090 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-os-release\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581188 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cni-binary-copy\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581266 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-netns\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-kubelet\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-os-release\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.580478 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581573 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-netns\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581906 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a60502c-d692-40e5-bbb7-d07aaaf80f10-hosts-file\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.581986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cnibin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582072 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a60502c-d692-40e5-bbb7-d07aaaf80f10-hosts-file\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " 
pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582142 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-kubelet\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582148 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582174 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cnibin\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-multus\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582422 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582604 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cnibin\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-rootfs\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc 
kubenswrapper[4793]: I0130 13:43:33.582837 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583289 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-k8s-cni-cncf-io\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583375 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583466 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-system-cni-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583706 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583795 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583867 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.583939 4793 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584008 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584102 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584174 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584243 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584325 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584408 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584491 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584563 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584645 4793 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584702 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-run-k8s-cni-cncf-io\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584718 4793 reconciler_common.go:293] "Volume 
detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584769 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584781 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584791 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584801 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584812 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584821 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584829 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584838 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584848 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584857 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584866 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584876 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc 
kubenswrapper[4793]: I0130 13:43:33.584885 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584893 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584902 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584910 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584919 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584927 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584935 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584944 4793 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-host-var-lib-cni-multus\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.584953 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585003 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585017 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585030 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585043 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585077 4793 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585090 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585103 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585115 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585128 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585140 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585152 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585165 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585177 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585189 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585200 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585212 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585223 4793 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585236 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585248 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585261 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585273 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585285 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585297 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585308 4793 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585320 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585332 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 
13:43:33.585345 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585358 4793 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585371 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585384 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585398 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585412 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585424 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585435 4793 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585448 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585459 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585468 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585478 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585489 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" 
DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585501 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585515 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585527 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585003 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585610 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cnibin\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-multus-cni-dir\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585689 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585723 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9dad744-dcef-4c9e-88b1-3d8d935794a4-system-cni-dir\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: 
I0130 13:43:33.584253 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-rootfs\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.582031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-cni-binary-copy\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.585865 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9dad744-dcef-4c9e-88b1-3d8d935794a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.587871 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.590579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.592541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-proxy-tls\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.604294 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.622147 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.631659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpthl\" (UniqueName: \"kubernetes.io/projected/4a60502c-d692-40e5-bbb7-d07aaaf80f10-kube-api-access-xpthl\") pod \"node-resolver-mbqcp\" (UID: \"4a60502c-d692-40e5-bbb7-d07aaaf80f10\") " pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.631918 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.632629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"ovnkube-node-g62p5\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.633892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6pg\" (UniqueName: \"kubernetes.io/projected/f59a12e8-194c-4874-a9ef-2fc58c18fbbe-kube-api-access-2f6pg\") pod \"machine-config-daemon-rdsch\" (UID: \"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\") " pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.636230 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mbqcp" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.640613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.641238 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.641969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjsp7\" (UniqueName: \"kubernetes.io/projected/f9dad744-dcef-4c9e-88b1-3d8d935794a4-kube-api-access-mjsp7\") pod \"multus-additional-cni-plugins-nsxfs\" (UID: \"f9dad744-dcef-4c9e-88b1-3d8d935794a4\") " pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.644029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgc5\" (UniqueName: \"kubernetes.io/projected/3e8d16db-eb58-4895-8c24-47d6f12b1ea4-kube-api-access-kxgc5\") pod \"multus-2ssnl\" (UID: \"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\") " pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.648227 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2ssnl" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.656275 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.661342 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.676619 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.683650 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52 WatchSource:0}: Error finding container 731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52: Status 404 returned error can't find the container with id 731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52 Jan 30 
13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.688566 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad92971cceae3d9cf75d1d1e68209c1c214fc2d070e69e4f9435cb07579a96de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:16Z\\\",\\\"message\\\":\\\"W0130 13:43:16.323216 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 13:43:16.323625 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769780596 cert, and key in /tmp/serving-cert-3571744094/serving-signer.crt, /tmp/serving-cert-3571744094/serving-signer.key\\\\nI0130 13:43:16.518841 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:43:16.523129 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:43:16.523353 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:16.524369 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3571744094/tls.crt::/tmp/serving-cert-3571744094/tls.key\\\\\\\"\\\\nF0130 13:43:16.810880 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" 
len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.688896 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a60502c_d692_40e5_bbb7_d07aaaf80f10.slice/crio-d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b WatchSource:0}: Error finding container d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b: Status 404 returned error can't find the container with id d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.694459 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8d16db_eb58_4895_8c24_47d6f12b1ea4.slice/crio-ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc WatchSource:0}: Error finding container ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc: Status 404 returned error can't find the container with id ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.698232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.707769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.720655 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.741862 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.757802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.785384 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.806041 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.822714 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.838449 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729
d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.859483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.869273 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.876212 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.885469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.902100 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.914265 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.914277 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.923869 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: W0130 13:43:33.944014 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9dad744_dcef_4c9e_88b1_3d8d935794a4.slice/crio-a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0 WatchSource:0}: Error finding container a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0: Status 404 returned error can't find the container with id a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0 Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.944237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.956002 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.966023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990162 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990141974 +0000 UTC m=+25.691490465 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990299 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990305 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: I0130 13:43:33.990319 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990358 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990342018 +0000 UTC m=+25.691690509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990435 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990482 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990469421 +0000 UTC m=+25.691817912 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990537 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990548 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990558 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:33 crc kubenswrapper[4793]: E0130 13:43:33.990580 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:34.990574384 +0000 UTC m=+25.691922875 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.090732 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090870 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090888 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090903 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.090951 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:35.0909364 +0000 UTC m=+25.792284911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.313692 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:45:02.916346742 +0000 UTC Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.405336 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.406717 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.407886 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.409241 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.409937 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.410961 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.411687 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.412301 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.413682 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.414326 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.416650 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.417663 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.420660 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.421590 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.422208 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.422752 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.423446 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.423892 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.424542 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.425245 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.425913 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.426514 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.427022 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.427781 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.428339 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.429006 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.429718 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.432795 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.433730 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.434748 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.435348 4793 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.435467 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.437364 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.438345 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.438845 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.440500 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.441686 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.442296 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.443335 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.444109 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.447114 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.447808 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.448917 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.450010 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.450934 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.451523 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.452477 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.453479 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.454507 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.454996 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.455921 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.456567 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.457188 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.458359 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.572359 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.572593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"ef125e9b2e327da265b22b82b1e4814fd706963ee20814b27cf83602bbc4e5dc"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.574144 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"731b4709f3f6678be66b16e755d2d8f8debdc9f716e1f6cbc598201980ee2a52"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.575693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mbqcp" event={"ID":"4a60502c-d692-40e5-bbb7-d07aaaf80f10","Type":"ContainerStarted","Data":"e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.575796 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mbqcp" event={"ID":"4a60502c-d692-40e5-bbb7-d07aaaf80f10","Type":"ContainerStarted","Data":"d5ae127e0c112232517505b3ed7827ba25c6e126bafb5f0c5a8d1a0d646cd70b"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.577316 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" exitCode=0 Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.577383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.577400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"483688d83c9fd52a9c7106da5a4bf9f5c29a0ecb4d0a52164165da4e2be17cc3"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.578914 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.578945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b78b13fed81582e751949091b34bc98c1de835dea70c0882797ffd3ec8f682ae"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.580129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3"} Jan 30 13:43:34 crc 
kubenswrapper[4793]: I0130 13:43:34.580156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.580168 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"520a78a684ca7b518512886e458b462273f9a3705d5f3e6d09790db4204d11ca"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.581743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.581771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.581783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"416a0a57299aa5cb5d7980a5b1d9c2f1f627d9e500c87db6a82e042106ade790"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.583466 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:34 crc kubenswrapper[4793]: E0130 13:43:34.583704 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.583846 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.583923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"a9b874b7613ae7ee9b60270e026bedc8c0a2614d0e9cafd7164ed92899b2cbb0"} Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.585738 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.596374 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.606550 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.618662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.629103 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.645951 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.655617 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.664115 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.685161 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.700197 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.709720 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.720540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.732257 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.757974 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.774293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.788419 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.799971 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.816329 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\
"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.828012 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.836772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.845999 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.856465 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.871705 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.881194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-c
ni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:43:34 crc kubenswrapper[4793]: I0130 13:43:34.999831 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:34.999944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:34.999974 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:36.999952974 +0000 UTC m=+27.701301455 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.000002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.000033 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000009 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000125 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.000117208 +0000 UTC m=+27.701465709 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000136 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000088 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000289 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000325 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000172 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.000162229 +0000 UTC m=+27.701510730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.000424 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.000390015 +0000 UTC m=+27.701738566 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.101571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101714 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101730 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101740 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.101780 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:37.101767985 +0000 UTC m=+27.803116466 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.313874 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:46:48.67389587 +0000 UTC Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.397444 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.397512 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.397523 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.397558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.397592 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:35 crc kubenswrapper[4793]: E0130 13:43:35.397655 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.586352 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f" exitCode=0 Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.586437 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" 
event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.591531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.601838 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run
-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.617509 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.632734 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.648337 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.661576 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.674450 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.687413 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc 
kubenswrapper[4793]: I0130 13:43:35.704487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.725568 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.744958 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.761028 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.783644 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.819173 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.823219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.825809 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.835730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.851635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.874616 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.892128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.907314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.918759 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.929659 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.943728 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.957303 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.968759 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.983867 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:35 crc kubenswrapper[4793]: I0130 13:43:35.994699 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:35Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.011461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.024620 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.041195 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.053487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.078185 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.096614 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.109373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.120025 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.132732 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.146714 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.157643 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.170848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.185552 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc 
kubenswrapper[4793]: I0130 13:43:36.314185 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:03:26.683976757 +0000 UTC Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.595237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f"} Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.597488 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab"} Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.612040 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.627524 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.657875 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.672030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.693112 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.705713 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":
\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.720780 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.753861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.779264 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.799514 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.814669 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.826254 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.840425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.856237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.867776 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.880866 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.891933 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.911720 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.923911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.938783 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.950113 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.966465 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.979676 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:36 crc kubenswrapper[4793]: I0130 13:43:36.990613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.006492 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.019499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.019615 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.019662 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.019649877 +0000 UTC m=+31.720998368 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.019975 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.019966194 +0000 UTC m=+31.721314685 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020072 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020087 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020097 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020118 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.020112078 +0000 UTC m=+31.721460569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020155 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.020174 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.020168819 +0000 UTC m=+31.721517310 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.021587 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.120262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120440 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120474 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120486 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.120550 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:41.120535515 +0000 UTC m=+31.821884006 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.315135 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:50:50.253154544 +0000 UTC Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.397971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.398033 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.398127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.397990 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.398228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:37 crc kubenswrapper[4793]: E0130 13:43:37.398328 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.600949 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f" exitCode=0 Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.601012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f"} Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.625283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.644212 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.673831 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.686583 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.696860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.706770 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.716716 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.729008 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.742468 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.754596 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.768668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.783001 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.796226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.922752 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pxcll"] Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.923144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.925279 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.925965 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.926395 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.927012 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.935524 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.949568 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.965909 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.978468 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.988995 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:37 crc kubenswrapper[4793]: I0130 13:43:37.997865 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:37Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.008138 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.018584 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.028905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34045014-77ce-47a5-9a21-a69d9f8cab72-host\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.028935 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g5hv\" (UniqueName: \"kubernetes.io/projected/34045014-77ce-47a5-9a21-a69d9f8cab72-kube-api-access-2g5hv\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.028966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34045014-77ce-47a5-9a21-a69d9f8cab72-serviceca\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.031190 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.042007 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.054140 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.068795 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.080952 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.099484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g5hv\" (UniqueName: \"kubernetes.io/projected/34045014-77ce-47a5-9a21-a69d9f8cab72-kube-api-access-2g5hv\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34045014-77ce-47a5-9a21-a69d9f8cab72-serviceca\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129850 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34045014-77ce-47a5-9a21-a69d9f8cab72-host\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.129923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34045014-77ce-47a5-9a21-a69d9f8cab72-host\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.131288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/34045014-77ce-47a5-9a21-a69d9f8cab72-serviceca\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.153111 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g5hv\" (UniqueName: 
\"kubernetes.io/projected/34045014-77ce-47a5-9a21-a69d9f8cab72-kube-api-access-2g5hv\") pod \"node-ca-pxcll\" (UID: \"34045014-77ce-47a5-9a21-a69d9f8cab72\") " pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.236819 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pxcll" Jan 30 13:43:38 crc kubenswrapper[4793]: W0130 13:43:38.252968 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34045014_77ce_47a5_9a21_a69d9f8cab72.slice/crio-29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae WatchSource:0}: Error finding container 29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae: Status 404 returned error can't find the container with id 29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.315432 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:12:12.210687211 +0000 UTC Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.608263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.611386 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d" exitCode=0 Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.611460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.613449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pxcll" event={"ID":"34045014-77ce-47a5-9a21-a69d9f8cab72","Type":"ContainerStarted","Data":"087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.613475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pxcll" event={"ID":"34045014-77ce-47a5-9a21-a69d9f8cab72","Type":"ContainerStarted","Data":"29a19415c18336ae54469f1508a1c6a9ebbd5983035cc16b278443e3cb65d7ae"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.647535 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.664980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.689354 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.707332 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.724206 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.738696 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.755260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.768521 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.781769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.793741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.805801 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.820998 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.835223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.852842 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.862355 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.864403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.864625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.864826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.865187 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.865606 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.873686 4793 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.873872 4793 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.874677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 
13:43:38.874685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.886324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.893105 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.896431 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.898599 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.911545 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.911912 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.915282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.930173 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.933674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.946564 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.949389 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.952429 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.961404 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.965934 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3
688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: E0130 13:43:38.966082 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.970906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:38Z","lastTransitionTime":"2026-01-30T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.978482 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:38 crc kubenswrapper[4793]: I0130 13:43:38.992165 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.029604 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.066684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.073378 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.109130 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.151328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.175211 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.193956 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.233561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.277661 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.316631 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:14:08.13287826 +0000 UTC Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.380688 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.397630 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:39 crc kubenswrapper[4793]: E0130 13:43:39.397760 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.398243 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:39 crc kubenswrapper[4793]: E0130 13:43:39.398333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.398406 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:39 crc kubenswrapper[4793]: E0130 13:43:39.398481 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.483595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.589330 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.619083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.619077 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13" exitCode=0 Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.644145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.657147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.669322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.682471 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.692417 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.704417 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d1
79449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.716314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.726663 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.735261 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.744003 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.753746 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.764795 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.774988 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.786210 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.794388 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.804567 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:39Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:39 crc kubenswrapper[4793]: I0130 13:43:39.896687 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:39Z","lastTransitionTime":"2026-01-30T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.003100 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.064198 4793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.107211 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.210512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.312994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.316950 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:09:24.300044507 +0000 UTC Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.412893 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc 
kubenswrapper[4793]: I0130 13:43:40.415550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.415561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.431459 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.448569 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.462648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.478092 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.489671 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.505226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.522387 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.551300 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.571253 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.591406 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.605270 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.618961 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.624256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.627738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.629815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.635996 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":
\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.654870 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.667758 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.679849 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.690112 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.709697 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.726701 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.727786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.740516 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.754071 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.769397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.804928 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.823593 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc 
kubenswrapper[4793]: I0130 13:43:40.829149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.829161 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.841770 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.857420 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.872165 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.911929 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:40 crc kubenswrapper[4793]: I0130 13:43:40.931702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:40Z","lastTransitionTime":"2026-01-30T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.034391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059368 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059490 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.059567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.059648 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.059704 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 13:43:49.059688847 +0000 UTC m=+39.761037338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060203 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060221 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060227 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.060216351 +0000 UTC m=+39.761564842 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060241 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060246 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.060240711 +0000 UTC m=+39.761589202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060254 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.060297 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.060285832 +0000 UTC m=+39.761634333 (durationBeforeRetry 8s). 
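The "No retries permitted until ... (durationBeforeRetry 8s)" entries here show the volume manager backing off between failed mount attempts. A minimal sketch of that doubling backoff, assuming a 500ms starting delay, a 2x factor, and a two-minute ceiling — the log itself only establishes that this operation has reached an 8s delay:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of an exponential backoff like the one behind "durationBeforeRetry":
// each failure doubles the wait up to a ceiling. initial, factor, and
// maxDelay are assumed values for illustration only.
func main() {
	const (
		initial  = 500 * time.Millisecond
		factor   = 2
		maxDelay = 2 * time.Minute
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Under these assumed constants an 8s delay corresponds to the fifth consecutive failure of the same mount operation.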
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.136976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.137089 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.171367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171510 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171534 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171546 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.171595 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.171582934 +0000 UTC m=+39.872931425 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.239725 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.317318 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:02:22.258654664 +0000 UTC Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.342142 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.397591 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.397761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.397953 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.398063 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:41 crc kubenswrapper[4793]: E0130 13:43:41.398624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.445247 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.547488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.634025 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.649976 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.661086 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.674523 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.676317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.689656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.701250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.714503 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.730608 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.744835 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.752473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.758220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.770472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.787313 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.800011 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.812216 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.827964 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.848126 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.854826 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.864971 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.876333 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.898324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.920397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.935553 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.947230 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.957785 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.958343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:41Z","lastTransitionTime":"2026-01-30T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.968864 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.987432 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db81
5e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:41 crc kubenswrapper[4793]: I0130 13:43:41.998823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.011690 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.024510 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.036957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.048659 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc 
kubenswrapper[4793]: I0130 13:43:42.060778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.060792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.163229 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.266230 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.317822 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:32:03.169814322 +0000 UTC Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.368454 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.471994 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.575582 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.640384 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97" exitCode=0 Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.640839 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.641208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.641514 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.660668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.670987 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.675486 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.678545 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.688855 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.701349 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.713982 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.724857 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.733994 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.744255 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.764815 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.774115 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.780738 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.787241 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.800924 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.812773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.824754 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.837750 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.848340 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.861024 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.875475 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.884991 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.889844 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.901718 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.910707 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.921608 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.958253 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:42 crc 
kubenswrapper[4793]: I0130 13:43:42.988142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.988151 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:42Z","lastTransitionTime":"2026-01-30T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:42 crc kubenswrapper[4793]: I0130 13:43:42.992662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.030889 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.069919 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.089956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.089994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.090007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.090023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.090034 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.114233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.150295 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.192530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.294934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.294980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.294993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.295011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.295024 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.318576 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:55:13.140489339 +0000 UTC Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397643 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:43 crc kubenswrapper[4793]: E0130 13:43:43.397710 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.397775 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:43 crc kubenswrapper[4793]: E0130 13:43:43.397977 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:43 crc kubenswrapper[4793]: E0130 13:43:43.397998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.499874 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.602189 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.646940 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9dad744-dcef-4c9e-88b1-3d8d935794a4" containerID="3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866" exitCode=0 Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.647106 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.647783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerDied","Data":"3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.669631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.688080 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.698461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704552 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.704630 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.714094 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0
529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.727031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.737730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.753144 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.766551 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.778953 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.791269 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.801272 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.806607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.813565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.825957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.841575 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:43 crc kubenswrapper[4793]: I0130 13:43:43.909367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:43Z","lastTransitionTime":"2026-01-30T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.010965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.010998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.011006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.011021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.011032 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.113163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215457 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.215481 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.317583 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.319669 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:37:54.265400165 +0000 UTC Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.419644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.521707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.595640 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr"] Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.596113 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.597757 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.597969 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.609797 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.623831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.625876 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.638969 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.651421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" event={"ID":"f9dad744-dcef-4c9e-88b1-3d8d935794a4","Type":"ContainerStarted","Data":"1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.651485 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.653515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.662632 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.681763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.699435 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704599 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704643 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.704763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsdl\" (UniqueName: \"kubernetes.io/projected/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-kube-api-access-5lsdl\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.711030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.725351 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727282 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.727402 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.741823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.754852 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.766030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.777132 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.789466 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.801702 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsdl\" (UniqueName: \"kubernetes.io/projected/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-kube-api-access-5lsdl\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" 
Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.805752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.806484 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.807304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.812831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.815179 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.822461 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsdl\" (UniqueName: \"kubernetes.io/projected/37d5d2ac-8c00-4221-8af9-ed9e5bea8a01-kube-api-access-5lsdl\") pod \"ovnkube-control-plane-749d76644c-hb9pr\" (UID: \"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.827345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.830623 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.838527 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.849682 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.860404 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.873069 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.883856 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.895750 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.907847 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.908066 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.922858 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4a
c3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932531 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.932601 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:44Z","lastTransitionTime":"2026-01-30T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.940220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.960647 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.974267 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:44 crc kubenswrapper[4793]: I0130 13:43:44.988366 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.007468 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc 
kubenswrapper[4793]: I0130 13:43:45.035522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.035533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137791 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.137804 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241211 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.241234 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.320704 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:19:30.962127846 +0000 UTC Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.343795 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.397238 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:45 crc kubenswrapper[4793]: E0130 13:43:45.397610 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.398107 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.398289 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:45 crc kubenswrapper[4793]: E0130 13:43:45.398454 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:45 crc kubenswrapper[4793]: E0130 13:43:45.398578 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.446724 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.548418 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.651360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.661679 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/0.log" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.664455 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52" exitCode=1 Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.664523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.665502 4793 scope.go:117] "RemoveContainer" containerID="d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.665725 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" event={"ID":"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01","Type":"ContainerStarted","Data":"07c07021edcccf8ce4d7cc581816d1ce648b86a1379f988ab98458bd8d7c53bd"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.687926 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.703428 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.718453 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.731641 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.740780 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc 
kubenswrapper[4793]: I0130 13:43:45.753523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.753533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.754588 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.766448 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.778980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.794156 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.810689 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.829851 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.839988 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.848472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.855955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.856372 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.860230 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.878163 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b350
68071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:45 crc kubenswrapper[4793]: I0130 13:43:45.959840 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:45Z","lastTransitionTime":"2026-01-30T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.054224 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xfcvw"] Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.054696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.054769 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.063702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.075532 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.088220 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.099341 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.112825 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.119095 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.119165 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5wx\" (UniqueName: \"kubernetes.io/projected/3401bbdc-090b-402b-bf7b-a4a823182946-kube-api-access-cl5wx\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.135015 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0
529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.147540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.161544 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.166847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.166998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.167104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.167196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.167274 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.176572 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.191401 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.205250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.218712 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.220300 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.220421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5wx\" (UniqueName: \"kubernetes.io/projected/3401bbdc-090b-402b-bf7b-a4a823182946-kube-api-access-cl5wx\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.220559 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 
13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.220689 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:46.720656287 +0000 UTC m=+37.422004828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.235128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.241499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5wx\" (UniqueName: \"kubernetes.io/projected/3401bbdc-090b-402b-bf7b-a4a823182946-kube-api-access-cl5wx\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.247875 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.257620 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.270250 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
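
Every "Failed to update status for pod" entry above shares one root cause: the kubelet's status patch must pass the pod.network-node-identity.openshift.io validating webhook on https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30, so the TLS handshake is rejected before any patch is applied. A minimal Go sketch of the same NotBefore/NotAfter check the handshake performs (the file path is an assumption: the volumeMounts above show the webhook container mounting its certificate at /etc/webhook-cert/, and tls.crt is only the conventional file name):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's actual serving cert.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now().UTC()
        // The same comparison TLS verification performs; the log's
        // "current time ... is after ..." message is NotAfter being exceeded.
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }
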
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.275222 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.295131 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.321316 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:37:54.276331783 +0000 UTC Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372299 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.372777 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.398826 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.475549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.475848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.476071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.476234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.476375 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.579929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.580182 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.675745 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" event={"ID":"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01","Type":"ContainerStarted","Data":"d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.682612 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.726362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.726523 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:46 crc kubenswrapper[4793]: E0130 13:43:46.726588 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:47.726571658 +0000 UTC m=+38.427920159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.785733 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.887614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:46 crc kubenswrapper[4793]: I0130 13:43:46.990601 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:46Z","lastTransitionTime":"2026-01-30T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.092671 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.194790 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.297346 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.322105 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:33:00.850173464 +0000 UTC Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397408 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397433 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397513 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.397509 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397608 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397688 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.397866 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.399594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.501383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.603895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.604370 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.685564 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/0.log" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.689669 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.706648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707187 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.707242 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.737710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.737903 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:47 crc kubenswrapper[4793]: E0130 13:43:47.737991 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:43:49.737969677 +0000 UTC m=+40.439318248 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.809837 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911747 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:47 crc kubenswrapper[4793]: I0130 13:43:47.911785 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:47Z","lastTransitionTime":"2026-01-30T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
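
The metrics-certs mount failures also show the kubelet's per-operation retry backoff. The underlying error (object "openshift-multus"/"metrics-daemon-secret" not registered) means the kubelet's secret cache has no entry for that object yet, and each failed MountVolume.SetUp is rescheduled with a doubled delay: durationBeforeRetry grows 500ms, then 1s, then 2s across the three attempts above. An illustrative sketch of that doubling with a cap (the cap value here is an assumption for illustration; the real kubelet cap is on the order of minutes):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // First delay and doubling match the log: 500ms -> 1s -> 2s.
        delay := 500 * time.Millisecond
        const maxDelay = 2 * time.Minute // assumed cap, for illustration only
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
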
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.014989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.015006 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.117348 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.219834 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.322368 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:24:04.869969193 +0000 UTC Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.323767 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.427800 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.530600 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.632923 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.633990 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.693958 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.695482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.696130 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.698509 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.699247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" event={"ID":"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01","Type":"ContainerStarted","Data":"f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.727345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.744752 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.747439 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.763473 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.773514 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.785763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.796798 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.810520 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.821265 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.833076 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846566 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.846633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.850500 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0
529df1bff9b1febd1fc19f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.858842 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.867023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.878345 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.888901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.898519 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.908446 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.920062 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.935820 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.946450 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.948879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:48Z","lastTransitionTime":"2026-01-30T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.960219 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.981211 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b350
68071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:48 crc kubenswrapper[4793]: I0130 13:43:48.994481 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.006890 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.019685 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.036430 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051785 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.051819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.064340 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.076904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.092386 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.106283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.118539 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.134165 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.151486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151582 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151626 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.15161408 +0000 UTC m=+55.852962571 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151945 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151970 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.151992 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152063 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 13:44:05.1520154 +0000 UTC m=+55.853363881 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152127 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.152109142 +0000 UTC m=+55.853457653 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152247 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.152448 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.15242718 +0000 UTC m=+55.853775671 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.153642 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.200479 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.212881 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.216493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.229692 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.233475 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.248616 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.252320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.252385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.251844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.251955 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.252726 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.252737 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.252781 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:05.252767066 +0000 UTC m=+55.954115557 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.264691 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[…],\\\"allocatable\\\":{…},\\\"capacity\\\":{…},\\\"conditions\\\":[…],\\\"images\\\":[…],\\\"nodeInfo\\\":{…}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.268770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.282006 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.282173 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.283737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.323238 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:12:47.723174589 +0000 UTC Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.386501 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.397999 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.398075 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398157 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.398010 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398294 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.398336 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398380 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.398426 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.489521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.489969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.490075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.490165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.490263 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.592263 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.694683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.695344 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.715862 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.730085 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.745774 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.756733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.756925 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: E0130 13:43:49.756990 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" 
failed. No retries permitted until 2026-01-30 13:43:53.756973453 +0000 UTC m=+44.458322014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.759754 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.770188 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.783014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.797957 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.798657 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.802260 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7
a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.812950 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.825284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.840977 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.854580 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.867976 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.877766 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.889330 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901253 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.901705 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:49Z","lastTransitionTime":"2026-01-30T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.904884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:49 crc kubenswrapper[4793]: I0130 13:43:49.915802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 
13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.004361 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.106152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.208960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.209075 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.311828 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.324293 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:22:55.449154957 +0000 UTC Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413011 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.413980 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.426104 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.436452 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.446114 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.470618 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.487020 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.499586 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.515079 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.516598 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.529248 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.541695 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.551072 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.562171 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.572925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.582936 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.593571 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.607646 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.618972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.618998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.619008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.619023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.619035 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.720633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.822886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.823554 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:50 crc kubenswrapper[4793]: I0130 13:43:50.926706 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:50Z","lastTransitionTime":"2026-01-30T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.029907 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.131890 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.234638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.235337 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.255725 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.325318 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:13:44.236169012 +0000 UTC Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.338143 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.397923 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.397973 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.398025 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398087 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.398135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398261 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398369 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:51 crc kubenswrapper[4793]: E0130 13:43:51.398466 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.441912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.544791 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.647363 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.750410 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.853728 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:51 crc kubenswrapper[4793]: I0130 13:43:51.955846 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:51Z","lastTransitionTime":"2026-01-30T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.058599 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.161680 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264434 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.264467 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.325696 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:28:16.692696062 +0000 UTC Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.366832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.367441 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.469819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.470150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573192 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.573286 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.676440 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.779351 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.882409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:52 crc kubenswrapper[4793]: I0130 13:43:52.984519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:52Z","lastTransitionTime":"2026-01-30T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.087111 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.193674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296282 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.296314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.326697 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:01:14.492607975 +0000 UTC Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397291 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.397441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.397667 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397500 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.397806 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.397952 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.398190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.398878 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.501928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.501994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.502013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.502040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.502096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.604896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.605535 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.708424 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.801549 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.801923 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:53 crc kubenswrapper[4793]: E0130 13:43:53.802193 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:01.802164549 +0000 UTC m=+52.503513080 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.811240 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:53 crc kubenswrapper[4793]: I0130 13:43:53.913986 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:53Z","lastTransitionTime":"2026-01-30T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
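[annotation] The metrics-certs mount failure is a separate symptom: the kubelet's object store has no record of the secret openshift-multus/metrics-daemon-secret yet ("not registered"), so the volume setup backs off (8s here) before retrying. A quick existence check via the official Kubernetes Python client, assuming a kubeconfig with access to this cluster:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    try:
        secret = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
        print("secret exists, keys:", sorted((secret.data or {}).keys()))
    except client.exceptions.ApiException as exc:
        # A 404 would mean the secret is genuinely missing, not just unsynced.
        print("lookup failed:", exc.status, exc.reason)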
Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.015940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.015994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.016006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.016023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.016067 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.118132 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.221149 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.323818 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.327197 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:30:05.65750785 +0000 UTC Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.427166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.530431 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.633128 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736207 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.736297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.839814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:54 crc kubenswrapper[4793]: I0130 13:43:54.942519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:54Z","lastTransitionTime":"2026-01-30T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045486 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.045574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.147946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.148484 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.251931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.252120 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.327888 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:12:38.336491925 +0000 UTC
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.354691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.354955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.355126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.355256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.355377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398128 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398265 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398297 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.398318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399196 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399421 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:43:55 crc kubenswrapper[4793]: E0130 13:43:55.399493 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.458136 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.561622 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.665356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.768902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.872153 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:55 crc kubenswrapper[4793]: I0130 13:43:55.975653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:55Z","lastTransitionTime":"2026-01-30T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.078921 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.182181 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.285252 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.328784 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:05:03.48515477 +0000 UTC
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.388233 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.490432 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.592826 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.695819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.798501 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:56 crc kubenswrapper[4793]: I0130 13:43:56.901601 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:56Z","lastTransitionTime":"2026-01-30T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.004423 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.107623 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.211782 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.315934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.315976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.315987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.316005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.316016 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.329456 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:55:35.229159097 +0000 UTC
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397617 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397716 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.397724 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.397912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.398037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.398242 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:43:57 crc kubenswrapper[4793]: E0130 13:43:57.398473 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.419557 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.522823 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.625838 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.729524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.729883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.730090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.730289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.730472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.833815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.834148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:57 crc kubenswrapper[4793]: I0130 13:43:57.936636 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:57Z","lastTransitionTime":"2026-01-30T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.031380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.040478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.053402 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.088901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z"
Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.113408 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.135861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.142926 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.154585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.171381 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.187282 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.203786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.214898 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.225250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.238430 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.245530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.254154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.269315 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.280717 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 
13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.293317 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.310372 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:58Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.330628 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:19:32.911810827 +0000 UTC Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347128 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.347200 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.450907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.450968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.450984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.451008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.451027 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.553713 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.657365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.657716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.657985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.658261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.658483 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.760885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.761865 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.865224 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.967984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:58 crc kubenswrapper[4793]: I0130 13:43:58.968130 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:58Z","lastTransitionTime":"2026-01-30T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.071367 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.174448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.278419 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.297739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.314315 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.318997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.319012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.331374 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:59:39.453971504 +0000 UTC Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.335752 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.341301 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.358848 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.364992 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.383007 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.387725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388866 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.388891 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397345 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397437 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397543 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.397378 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397687 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397816 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.397902 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
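With the node's Ready condition pinned False by NetworkReady=false, the kubelet refuses to start new sandboxes for regular (non-host-network) pods, which is why the four diagnostics pods above loop on "Error syncing pod, skipping". The runtime's own view of network readiness can be read straight off the CRI endpoint; below is a minimal Go sketch under two assumptions not shown in this log: CRI-O's default socket path /var/run/crio/crio.sock, and the k8s.io/cri-api and google.golang.org/grpc modules being available.

```go
// cristatus.go - ask the container runtime for its view of NetworkReady.
// Diagnostic sketch only; the socket path is an assumption (CRI-O default),
// not something this log confirms.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		panic(err)
	}
	// While the CNI configuration directory is empty, expect
	// RuntimeReady=true but NetworkReady=false here.
	for _, c := range resp.Status.Conditions {
		fmt.Printf("%s=%t reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```

Running crictl info on the node surfaces the same RuntimeReady/NetworkReady conditions without any code.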
Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.415716 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... conditions, allocatable, capacity, and image list identical to the attempt at 13:43:59.383007 above ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 13:43:59 crc kubenswrapper[4793]: E0130 13:43:59.415882 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
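The repeated node-status patch failures above, and the eventual "update node status exceeds retry count", share one root cause: the API server forwards each patch to the node.network-node-identity.openshift.io validating webhook at 127.0.0.1:9743, and that webhook serves a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. Nothing is wrong with the patches themselves. A minimal Go sketch to confirm what the endpoint is presenting; the address is taken from the log, and chain verification is skipped deliberately so the expired leaf can still be read:

```go
// inspectcert.go - print the validity window of the certificate served at
// the webhook endpoint named in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// InsecureSkipVerify on purpose: we want to inspect the leaf
	// certificate even though it no longer verifies.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", leaf.Subject)
	fmt.Printf("not before: %s\n", leaf.NotBefore.UTC())
	fmt.Printf("not after:  %s\n", leaf.NotAfter.UTC())
	if now := time.Now(); now.After(leaf.NotAfter) {
		fmt.Printf("expired %s ago\n", now.Sub(leaf.NotAfter).Round(time.Hour))
	}
}
```

Expect not-after 2025-08-24T17:21:41Z. On a CRC-style single-node cluster this pattern typically appears after the VM has been left powered off or suspended past the lifetime of its internal certificates; run the probe only against this local diagnostic endpoint, since it skips verification.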
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.417980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.418069 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.519764 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.622881 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
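Separately from the webhook failure, the NotReady heartbeats repeating roughly every 100 ms above have their own trigger: the kubelet's network plugin finds no CNI configuration in /etc/kubernetes/cni/net.d/, and the condition will not clear until the network provider writes one. The check the message implies is trivial to reproduce; a sketch follows, where the extension list is an assumption based on what CNI config loaders commonly accept:

```go
// cnicheck.go - report whether the directory the kubelet complains about
// contains any CNI network configuration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		// A missing directory explains NetworkPluginNotReady just as well
		// as an empty one.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // commonly scanned extensions
			fmt.Println("config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files; NetworkReady stays false")
	}
}
```

Given that the network provider's own components are among the pods whose status updates the expired webhook is rejecting, the empty directory is plausibly downstream of the same certificate problem rather than an independent fault.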
Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.725681 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.827951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.929935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.929987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.930001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.930023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:43:59 crc kubenswrapper[4793]: I0130 13:43:59.930037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:43:59Z","lastTransitionTime":"2026-01-30T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.033265 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.136664 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.240977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.241112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.332528 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:46:24.223670814 +0000 UTC Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.343481 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.409642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.423642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.439904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.446439 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.454223 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z"
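Every one of these per-pod patches from status_manager.go:875 dies on the same expired pod.network-node-identity.openshift.io webhook, so the statuses visible through the API server stay frozen at their last successfully persisted values. To enumerate which pods report NotReady and why, from the API side, a client-go sketch; the kubeconfig path is a placeholder assumption:

```go
// podconditions.go - list pods whose Ready condition is not True, with the
// reason and message the API server currently holds for them.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at whatever kubeconfig reaches the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("%s/%s: %s (%s)\n", p.Namespace, p.Name, c.Reason, c.Message)
			}
		}
	}
}
```

Design note: because this webhook intercepts status patches for both nodes and pods (the /node and /pod paths above), a single expired serving certificate is enough to stall status reporting for the whole node.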
Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.472363 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.496234 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.509362 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.524837 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.537970 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.549335 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.551405 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.565507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.577195 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.590480 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.602873 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.612706 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.622542 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.624358 4793 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.643074 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc3
2fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.652617 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.655599 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.666775 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.678631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.689301 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.700302 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.709997 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.720626 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.732501 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.743824 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.753262 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.754452 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.765625 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.777677 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.795001 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d
9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.809996 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.821911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.833640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.855338 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.857277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:00 crc kubenswrapper[4793]: I0130 13:44:00.959989 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:00Z","lastTransitionTime":"2026-01-30T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.062807 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.166765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.269194 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.333331 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:36:40.916979874 +0000 UTC Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.371324 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398279 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398321 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.398184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398548 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398672 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.398774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.485278 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.588707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691513 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.691624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.794888 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.888468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.888658 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:01 crc kubenswrapper[4793]: E0130 13:44:01.889303 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:17.889276683 +0000 UTC m=+68.590625214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.898437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.898857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.899159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.899372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:01 crc kubenswrapper[4793]: I0130 13:44:01.899571 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:01Z","lastTransitionTime":"2026-01-30T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.002962 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.106810 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.208843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.311682 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.334278 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:18:17.426581611 +0000 UTC Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.414803 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.517310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.620926 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.723855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.724565 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.827366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.930679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:02 crc kubenswrapper[4793]: I0130 13:44:02.931634 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:02Z","lastTransitionTime":"2026-01-30T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.034976 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.137444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.241330 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.334680 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:31:47.357520775 +0000 UTC
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.343919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.344403 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398533 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.398591 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398906 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:03 crc kubenswrapper[4793]: E0130 13:44:03.398974 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.448984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.449210 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.551649 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.654997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.655158 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.757935 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.859900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.860414 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:03 crc kubenswrapper[4793]: I0130 13:44:03.962870 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:03Z","lastTransitionTime":"2026-01-30T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.068922 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.171941 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.274993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.275111 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.335451 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:04:08.725203138 +0000 UTC
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.377292 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.479844 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.582531 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.685420 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.787571 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.890773 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:04 crc kubenswrapper[4793]: I0130 13:44:04.993916 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:04Z","lastTransitionTime":"2026-01-30T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.096478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.204162 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222102 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222271 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222247223 +0000 UTC m=+87.923595714 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222351 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.222396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222483 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222490 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222524 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222561 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222589 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222579871 +0000 UTC m=+87.923928352 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222599 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222634 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222612141 +0000 UTC m=+87.923960672 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.222707 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.222681053 +0000 UTC m=+87.924029584 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.306543 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.323152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323376 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323413 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323428 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.323504 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:37.32348303 +0000 UTC m=+88.024831531 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.336771 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:15:23.055875607 +0000 UTC
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397474 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397585 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397608 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397634 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.397483 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397723 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:05 crc kubenswrapper[4793]: E0130 13:44:05.397876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.408980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.409124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511288 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.511331 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.614868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.614945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.614984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.615015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.615038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722185 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.722226 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.825939 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:05 crc kubenswrapper[4793]: I0130 13:44:05.928219 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:05Z","lastTransitionTime":"2026-01-30T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.030962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.031191 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.133592 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.236965 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.337278 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:19:47.088168386 +0000 UTC
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339434 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.339497 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.442199 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.546713 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.649691 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.752616 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.855544 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:06 crc kubenswrapper[4793]: I0130 13:44:06.958699 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:06Z","lastTransitionTime":"2026-01-30T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.061372 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.164444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.267339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.337916 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:34:29.675908603 +0000 UTC Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.369930 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397272 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397350 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397502 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397570 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.397565 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397727 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:07 crc kubenswrapper[4793]: E0130 13:44:07.397798 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.471972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.472060 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.574572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.676965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.677117 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.778953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.881393 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:07 crc kubenswrapper[4793]: I0130 13:44:07.983956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:07Z","lastTransitionTime":"2026-01-30T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.086399 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.188970 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291878 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.291990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.292006 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.338117 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:32:35.834755046 +0000 UTC Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.394827 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.496958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.497001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.599512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.701396 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.766556 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.767167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/0.log" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.768975 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" exitCode=1 Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.769006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.769036 4793 scope.go:117] "RemoveContainer" containerID="d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.769600 4793 scope.go:117] "RemoveContainer" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" Jan 30 13:44:08 crc kubenswrapper[4793]: E0130 13:44:08.769722 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.784706 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.797014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.804624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.807678 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.818200 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.835447 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed 
to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.847369 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.859515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.871018 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.883509 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.894798 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.907298 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:08Z","lastTransitionTime":"2026-01-30T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.908441 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.920426 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.933480 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.947706 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.959560 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.977538 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:08 crc kubenswrapper[4793]: I0130 13:44:08.992944 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc 
kubenswrapper[4793]: I0130 13:44:09.010669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.010953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.113927 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.217716 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.321876 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.338305 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:33:39.847985784 +0000 UTC Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397259 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397346 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.397410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.397528 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397277 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.397622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.397918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.398208 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.424961 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.527482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.527793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.528085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.528198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.528282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.630451 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.733762 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.755195 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.771590 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.774486 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777216 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.777282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.791760 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.795988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
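
Every patch attempt in this burst fails identically: the node.network-node-identity.openshift.io webhook serving on https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30, so the API server's webhook client rejects the TLS handshake and the status patch never lands. The retried attempts carry a status payload byte-identical to the 13:44:09.771590 attempt above; they are elided here as {…}. A minimal sketch to confirm the expiry from the node itself, assuming Python 3 with the third-party cryptography package and that the webhook endpoint permits a plain (unauthenticated) TLS handshake:

# Sketch: fetch the webhook's serving certificate and compare its
# notAfter against the current clock. With no CA bundle supplied,
# ssl.get_server_certificate skips verification, so the handshake
# succeeds even though the certificate is expired.
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party package; assumed installed

HOST, PORT = "127.0.0.1", 9743  # taken from the failing Post URL in the log

pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

not_after = cert.not_valid_after_utc  # cryptography >= 42; older releases use not_valid_after
now = datetime.now(timezone.utc)
print("subject: ", cert.subject.rfc4514_string())
print("notAfter:", not_after.isoformat())
print("expired: ", now > not_after, "(now =", now.isoformat() + ")")

Run against this node it should print notAfter 2025-08-24T17:21:41+00:00 and expired True, matching the x509 error quoted in every failed attempt.
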
event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.796112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.808987 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
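
Interleaved with the webhook failures, the kubelet keeps publishing Ready=False because the container runtime reports no CNI configuration in /etc/kubernetes/cni/net.d/; on this node ovn-kubernetes writes that file once its own pods come up, which the expired certificate is in turn blocking. A rough stand-in for that directory check, under the assumption that any *.conf, *.conflist, or *.json file in the directory counts as a network config:

# Sketch: report whether the CNI conf dir contains any network config,
# mimicking the "no CNI configuration file" readiness check in the log.
from pathlib import Path

CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # path quoted in the kubelet message

if not CONF_DIR.is_dir():
    print(f"{CONF_DIR} does not exist -- network plugin not ready")
else:
    configs = sorted(p.name for p in CONF_DIR.iterdir()
                     if p.suffix in {".conf", ".conflist", ".json"})
    if configs:
        print("CNI configs found:", ", ".join(configs))
    else:
        print(f"no CNI configuration file in {CONF_DIR}/ -- network plugin not ready")

Until a config file appears there, every sync loop re-emits the NodeNotReady condition seen above.
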
event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.812698 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.823632 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.826833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.826977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.827039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.827147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.827246 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.840212 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:09 crc kubenswrapper[4793]: E0130 13:44:09.840325 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.841722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:09 crc kubenswrapper[4793]: I0130 13:44:09.944469 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:09Z","lastTransitionTime":"2026-01-30T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.047636 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.151493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.253944 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.338681 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:43:27.26465699 +0000 UTC Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.356444 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.414357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ini
tContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.430880 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.443733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.458004 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.459783 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.486194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7
a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d001ca0ba80224c29812ce724a841a4a054137e0529df1bff9b1febd1fc19f52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"message\\\":\\\"65 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.678895 5965 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:43:44.679377 5965 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679632 5965 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.679987 5965 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.680272 5965 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 13:43:44.681428 5965 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:43:44.681463 5965 factory.go:656] Stopping watch factory\\\\nI0130 13:43:44.681476 5965 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:43:44.681684 5965 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" 
logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"i
nitContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.501565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.514764 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.527939 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.540192 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.551425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563252 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.563913 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.573591 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.583635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.594892 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.604924 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.621760 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.633202 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.665946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.666029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.768964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:10 crc kubenswrapper[4793]: I0130 13:44:10.769416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
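Editor's note: each "Node became not ready" record carries the new Ready condition as inline JSON in the condition= field. A throwaway sketch that decodes one such payload (the struct below is illustrative and only mirrors the fields visible in these records, not the full Kubernetes NodeCondition type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors the condition JSON logged by setters.go above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from one of the records above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:10Z","lastTransitionTime":"2026-01-30T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}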
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.338826 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:37:14.813778487 +0000 UTC
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.391957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.392039 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397383 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397395 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397421 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397737 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397581 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.397421 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:11 crc kubenswrapper[4793]: E0130 13:44:11.397936 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495692 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.495708 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.599603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.599990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.600263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.600477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:11 crc kubenswrapper[4793]: I0130 13:44:11.600652 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:11Z","lastTransitionTime":"2026-01-30T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
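Editor's note: the node stays NotReady and pod sandboxes cannot be created because the kubelet finds no network config in /etc/kubernetes/cni/net.d, the directory named in every record above. A minimal standalone check of that condition (the glob extensions are an assumption about what counts as a config file; the real kubelet delegates this to the CNI library's config loader):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log records
	var found []string
	// Extensions are an assumption; CNI configs are commonly .conf/.conflist/.json.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if m, err := filepath.Glob(filepath.Join(confDir, pat)); err == nil {
			found = append(found, m...)
		}
	}
	if len(found) == 0 {
		fmt.Fprintf(os.Stderr, "no CNI configuration file in %s; has the network provider started?\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}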
Jan 30 13:44:12 crc kubenswrapper[4793]: I0130 13:44:12.339972 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:24:47.465471492 +0000 UTC
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.340319 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:38:23.07676149 +0000 UTC
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.351127 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398304 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398369 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398442 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398591 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398677 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.398733 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:13 crc kubenswrapper[4793]: E0130 13:44:13.398796 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.453933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:13 crc kubenswrapper[4793]: I0130 13:44:13.555912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:13Z","lastTransitionTime":"2026-01-30T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:14 crc kubenswrapper[4793]: I0130 13:44:14.341307 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:49:38.53161922 +0000 UTC
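Editor's note: most of this journal is the same five-record status block repeated every ~100 ms with fresh timestamps. A small triage sketch that collapses such repeats before reading (the regular expression is tuned only to the kubenswrapper klog records in this excerpt; it is not a general journald parser):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches klog records like: I0130 13:44:14.341307 4793 setters.go:603] "..."
	// Group 1 = severity+date, group 2 = source file and message; timestamps
	// and PIDs are dropped so repeated records collapse into one counted key.
	re := regexp.MustCompile(`([IEW]\d{4}) \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+(\S+\]\s+.*)`)
	counts := map[string]int{}
	var order []string
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some records are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			key := m[1] + " " + m[2]
			if counts[key] == 0 {
				order = append(order, key)
			}
			counts[key]++
		}
	}
	for _, k := range order {
		fmt.Printf("%6d  %.120s\n", counts[k], k)
	}
}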
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.341753 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:43:01.917877761 +0000 UTC
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397845 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397853 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.397866 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.397960 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.398040 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.398118 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:15 crc kubenswrapper[4793]: E0130 13:44:15.398189 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.408710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.510761 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.613260 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.715383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.818124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:15 crc kubenswrapper[4793]: I0130 13:44:15.920423 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:15Z","lastTransitionTime":"2026-01-30T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.022502 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.124174 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.226105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.328167 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.342167 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:41:36.723992576 +0000 UTC Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.430936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.533881 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.636955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.637012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.739738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.739941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.740126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.740194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.740255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.842122 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:16 crc kubenswrapper[4793]: I0130 13:44:16.945112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:16Z","lastTransitionTime":"2026-01-30T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047208 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.047238 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.149950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.150070 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.251991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.252084 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.343107 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:59:16.047852147 +0000 UTC Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.354712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.354949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.355069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.355165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.355274 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398110 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398183 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.398211 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398262 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398306 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398431 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.398570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.458794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.458837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.458848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.458864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.458877 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.560667 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.663433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.765545 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.867352 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.945296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.945410 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:17 crc kubenswrapper[4793]: E0130 13:44:17.945477 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:44:49.945457312 +0000 UTC m=+100.646805803 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.969533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.969573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.969587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.969603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:17 crc kubenswrapper[4793]: I0130 13:44:17.969613 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:17Z","lastTransitionTime":"2026-01-30T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.073621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.073658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.073669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.073695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.073715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.176694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.176911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.176978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.177037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.177132 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.279032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.279100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.279112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.279127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.279137 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.343559 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:36:24.538034173 +0000 UTC Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381211 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.381433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.483972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.484038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.586251 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.689238 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.792364 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.895622 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:18 crc kubenswrapper[4793]: I0130 13:44:18.997474 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:18Z","lastTransitionTime":"2026-01-30T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.100440 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.202848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.304950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.305824 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.344682 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:54:25.76134155 +0000 UTC Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398251 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398423 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.398648 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.398958 4793 scope.go:117] "RemoveContainer" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.398975 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.399149 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:19 crc kubenswrapper[4793]: E0130 13:44:19.399304 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.411914 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.412377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc 
kubenswrapper[4793]: I0130 13:44:19.412386 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.424308 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.435796 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.447520 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.460543 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.473839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.484209 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.496781 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.506663 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.514493 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.516763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.527309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 
2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.539803 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.552552 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.564554 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.574692 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.585582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.608034 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.616541 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.718362 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.816333 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.819486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.820003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.821450 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.836502 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.854906 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.869803 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.890810 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.908399 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.921552 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.923348 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:19Z","lastTransitionTime":"2026-01-30T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.946832 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.962367 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:19 crc kubenswrapper[4793]: I0130 13:44:19.984661 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043333 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.043394 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.063966 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.083421 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.093587 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.107538 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.117990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.118000 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.118024 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.127458 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130112 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.130278 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.141511 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.143322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.145488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.156638 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.157930 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3
688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.161414 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.172867 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.175400 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.190243 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.190396 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.191691 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.294341 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.345277 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:10:38.23478811 +0000 UTC Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.396991 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.412790 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.423196 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.433371 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.447710 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.463999 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.475142 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.493283 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.500131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.505181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.516963 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.526878 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.539615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.554391 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.567805 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.579484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.588925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.598996 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602576 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.602643 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.618034 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.705306 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808061 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.808096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.823982 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.824702 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/1.log" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.826936 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" exitCode=1 Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.826990 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.827021 4793 scope.go:117] "RemoveContainer" containerID="ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.828008 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:20 crc kubenswrapper[4793]: E0130 13:44:20.828274 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.839365 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.849550 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.866716 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf0a56bc7e34cfcbb80b4d74e44d81ea9954da7a8e3665929db4c8aa716769f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:08Z\\\",\\\"message\\\":\\\"found while processing openshift-etcd-operator/etcd-operator-b45778765-zrj8g: failed to check if pod openshift-etcd-operator/etcd-operator-b45778765-zrj8g is in primary UDN: could not find OVN pod annotation in map[]\\\\nI0130 13:44:08.535135 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-multus/multus-admission-controller-857f4d67dd-mnzcq: failed to check if pod openshift-multus/multus-admission-controller-857f4d67dd-mnzcq is in primary UDN: could not find OVN pod annotation in map[cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes:hosted-cluster-api-access]\\\\nI0130 13:44:08.535148 6172 controller.go:257] Controller udn-host-isolation-manager: error found while processing openshift-service-ca/service-ca-9c57cc56f-n9v6k: failed to check if pod openshift-service-ca/service-ca-9c57cc56f-n9v6k is in primary UDN: could not find OVN pod annotation in map[openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default]\\\\nE0130 13:44:08.602321 6172 shared_informer.go:316] \\\\\\\"Unhandled Error\\\\\\\" err=\\\\\\\"unable to sync caches for ovn-lb-controller\\\\\\\" logger=\\\\\\\"UnhandledError\\\\\\\"\\\\nI0130 13:44:08.603514 6172 ovnkube.go:599] Stopped ovnkube\\\\nI0130 13:44:08.603573 6172 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 
13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.880565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.892338 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.904981 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.911813 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:20Z","lastTransitionTime":"2026-01-30T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.918194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.930647 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.945360 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.955099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.965011 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.979440 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:20 crc kubenswrapper[4793]: I0130 13:44:20.990073 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.002323 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.012938 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.014322 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.025848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.037923 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: 
I0130 13:44:21.117553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.117750 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.222978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.223186 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.326178 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.345495 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:50:38.536178898 +0000 UTC Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.397495 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.397636 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.397846 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.397908 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.398099 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.398161 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.398296 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.398354 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.428928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.429027 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.530915 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.634207 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.736714 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.830697 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.833458 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:21 crc kubenswrapper[4793]: E0130 13:44:21.833590 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.838273 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.846177 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.857357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.868353 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.876321 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.885604 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.898387 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.910582 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.920898 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.933028 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.940437 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:21Z","lastTransitionTime":"2026-01-30T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.944014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.957236 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 
2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.972275 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.982529 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:21 crc kubenswrapper[4793]: I0130 13:44:21.998336 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e05182
7116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.009848 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.019222 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.028708 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.042574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.144737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.246959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.246984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.246992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.247022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.247032 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.346145 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:26:09.483688963 +0000 UTC Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.348811 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.451485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.554509 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.656902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.758999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.759073 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837161 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/0.log" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837191 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" containerID="9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812" exitCode=1 Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerDied","Data":"9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.837499 4793 scope.go:117] "RemoveContainer" containerID="9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.851714 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.862515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.864578 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.878656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.889434 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.906879 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.921527 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.931792 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.941515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.952529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.965595 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.966605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 
13:44:22.966614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:22Z","lastTransitionTime":"2026-01-30T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.978519 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:22 crc kubenswrapper[4793]: I0130 13:44:22.990791 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.002921 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.013379 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.023995 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.037877 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.053695 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.069229 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.171412 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273491 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.273512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.347307 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:18:44.306697568 +0000 UTC Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.376120 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.397855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.397910 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.397956 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.398025 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.398103 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.398159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.398214 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:23 crc kubenswrapper[4793]: E0130 13:44:23.398269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.478173 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.580685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.683726 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.790149 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.841357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/0.log" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.841398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.856230 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.870631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.883293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.892855 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.893484 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.904845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.916505 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.929609 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.944027 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.957152 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.968235 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.979512 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.991129 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:23 crc 
kubenswrapper[4793]: I0130 13:44:23.994602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.994628 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:23Z","lastTransitionTime":"2026-01-30T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:23 crc kubenswrapper[4793]: I0130 13:44:23.999373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.016476 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.028232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.038550 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.048960 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.096916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.097465 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.200339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.303795 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.347958 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:30:40.200319089 +0000 UTC Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.406996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.407906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.511262 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.613992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.614195 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.717326 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.819912 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:24 crc kubenswrapper[4793]: I0130 13:44:24.922933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:24Z","lastTransitionTime":"2026-01-30T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.025573 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.128748 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.231997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.232010 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.334376 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.348703 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:51:39.846515397 +0000 UTC Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398194 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398296 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398497 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398507 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.398572 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398641 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:25 crc kubenswrapper[4793]: E0130 13:44:25.398946 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.437608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.541180 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.645208 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.748115 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.851306 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:25 crc kubenswrapper[4793]: I0130 13:44:25.954530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:25Z","lastTransitionTime":"2026-01-30T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.057458 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.159839 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.262116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.349668 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:34:03.484926221 +0000 UTC Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.364993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.467484 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.571588 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.674666 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.777794 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.880404 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:26 crc kubenswrapper[4793]: I0130 13:44:26.983470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:26Z","lastTransitionTime":"2026-01-30T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.086606 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.189949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.190100 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.292130 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.350274 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:32:22.174782172 +0000 UTC Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.394245 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397411 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397487 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.397432 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397572 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397699 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397875 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:27 crc kubenswrapper[4793]: E0130 13:44:27.397980 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.497660 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.600174 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.702970 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.805998 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.908942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:27 crc kubenswrapper[4793]: I0130 13:44:27.909770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:27Z","lastTransitionTime":"2026-01-30T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.012694 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.115744 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.218937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.219129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.322725 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.351122 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:30:29.903666294 +0000 UTC Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.425947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.426011 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.534740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.638488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.741547 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.844872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.844964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.844991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.845021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.845129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.948925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:28 crc kubenswrapper[4793]: I0130 13:44:28.949163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:28Z","lastTransitionTime":"2026-01-30T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.052217 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.155848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.258804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.258919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.258988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.259025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.259077 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.351927 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:38:36.731906104 +0000 UTC Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.362704 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397499 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397548 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.397525 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397666 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397786 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397840 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:29 crc kubenswrapper[4793]: E0130 13:44:29.397937 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.465626 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.569169 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.671614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.773974 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.876837 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:29 crc kubenswrapper[4793]: I0130 13:44:29.979377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:29Z","lastTransitionTime":"2026-01-30T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.082297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.185617 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.260993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.280946 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
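
This first "Error updating node status" entry shows why the patch never lands: before the API server will apply it, it must consult the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24, about five months before the node's current clock (2026-01-30). The x509 failure can be reproduced from the node with a short Python sketch; it assumes the third-party cryptography package is installed, and the host and port are taken from the error message:

import socket
import ssl
from datetime import datetime, timezone
from cryptography import x509  # third-party package; assumed available

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the error message

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notAfter:", cert.not_valid_after_utc)  # cryptography >= 42
print("expired:", cert.not_valid_after_utc < now)

The retries that follow fail the same way, so the kubelet keeps recording NotReady events without ever persisting the status.
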
event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.286990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.287012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.313482 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.319380 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.335484 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.342963 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.352483 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:33:27.324818466 +0000 UTC
Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.361542 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.366928 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.381892 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: E0130 13:44:30.382205 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.383896 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.415584 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.426039 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.435507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.445277 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.456006 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.467014 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.481212 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.489757 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.494013 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.506018 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.524128 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.535331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.546980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.557764 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.567428 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.583149 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.592179 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.593778 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.611069 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b350
68071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.694698 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.797855 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:30 crc kubenswrapper[4793]: I0130 13:44:30.900630 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:30Z","lastTransitionTime":"2026-01-30T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.010780 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.113668 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.217509 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.320577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.353150 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:28:27.130567791 +0000 UTC Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397814 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397773 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.397887 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.397961 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.397998 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.398076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:31 crc kubenswrapper[4793]: E0130 13:44:31.398132 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.422990 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.527737 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631187 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.631323 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.733908 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.836898 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:31 crc kubenswrapper[4793]: I0130 13:44:31.939992 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:31Z","lastTransitionTime":"2026-01-30T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.042989 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.146547 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.249956 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.353521 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.354382 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:57:05.701883648 +0000 UTC Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.456488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.559309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.661847 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.765722 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.868980 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.869159 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:32 crc kubenswrapper[4793]: I0130 13:44:32.971537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:32Z","lastTransitionTime":"2026-01-30T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.074335 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.177412 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.279902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.279989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.280005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.280034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.280092 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.354981 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:53:09.776495734 +0000 UTC Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.382905 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397576 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.397650 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.397827 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.397973 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.398109 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:33 crc kubenswrapper[4793]: E0130 13:44:33.398261 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485197 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.485250 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
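
The util.go:30 and pod_workers.go:1301 pairs above show the same four pods repeatedly attempting sandbox creation and being skipped, since a pod sandbox cannot be set up without the pod network. Below is a small self-contained Go helper for summarizing that churn from a journal dump; the pod="..." field format is taken from the lines above.

    // podsyncerrs.go: counts "Error syncing pod, skipping" entries per pod
    // from a journal dump read on stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        podRE := regexp.MustCompile(`Error syncing pod, skipping.*pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := podRE.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%6d  %s\n", n, pod)
        }
    }

Feeding it the kubelet journal, for example journalctl -u kubelet | go run podsyncerrs.go, prints one count per affected pod; the roughly two-second retry cadence itself (13:44:31, 13:44:33, 13:44:35) is visible in the timestamps above.
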
Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.588401 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.691387 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.794652 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.904897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:33 crc kubenswrapper[4793]: I0130 13:44:33.905374 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:33Z","lastTransitionTime":"2026-01-30T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.009244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.111954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.112097 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.214629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.215530 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.319841 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.356232 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:30:54.347948661 +0000 UTC Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.423611 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.529836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.530385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.633627 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.737878 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.841572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:34 crc kubenswrapper[4793]: I0130 13:44:34.945407 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:34Z","lastTransitionTime":"2026-01-30T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.049744 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.152991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.153310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256253 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.256373 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.356910 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:25:02.017465021 +0000 UTC
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.358390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.358919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.359176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.359381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.359590 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.397854 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.398173 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.398039 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.398196 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.397879 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.398806 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.398998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.399129 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.399374 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd"
Jan 30 13:44:35 crc kubenswrapper[4793]: E0130 13:44:35.399618 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.462974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.463123 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.565315 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.667573 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.770708 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.873512 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:35 crc kubenswrapper[4793]: I0130 13:44:35.976563 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:35Z","lastTransitionTime":"2026-01-30T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.079518 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182742 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.182954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.285548 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.357810 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:19:42.575093871 +0000 UTC Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.388241 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.490913 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.593427 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.696314 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.798694 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:36 crc kubenswrapper[4793]: I0130 13:44:36.901551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:36Z","lastTransitionTime":"2026-01-30T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.004765 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.107201 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.210955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251486 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251612 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.251703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.251806 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.251854 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.251840351 +0000 UTC m=+151.953188842 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252152 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252173 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252184 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252201 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252219 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.252209782 +0000 UTC m=+151.953558263 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252260 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.252244792 +0000 UTC m=+151.953593293 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.252391 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.252383756 +0000 UTC m=+151.953732247 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.313732 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.352995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353185 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353200 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353212 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.353254 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:41.353240493 +0000 UTC m=+152.054588994 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.358965 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:46:42.761422588 +0000 UTC Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397273 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397355 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397356 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.397233 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397521 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:37 crc kubenswrapper[4793]: E0130 13:44:37.397562 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.415469 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.517863 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.620216 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.722623 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.825101 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:37 crc kubenswrapper[4793]: I0130 13:44:37.927356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:37Z","lastTransitionTime":"2026-01-30T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.029784 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.132644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.235819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.235974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.236078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.236239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.236415 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.338399 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.359280 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:02:37.693562917 +0000 UTC Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441688 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.441700 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.544860 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.646874 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.749806 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.851914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.852462 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:38 crc kubenswrapper[4793]: I0130 13:44:38.955672 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:38Z","lastTransitionTime":"2026-01-30T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.058113 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.159872 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.262430 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.359647 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:49:04.2078353 +0000 UTC Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.364515 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.397980 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.398229 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.398263 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398502 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.398543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398715 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398695 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:44:39 crc kubenswrapper[4793]: E0130 13:44:39.398794 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.467424 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.569822 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.672743 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.774687 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.877465 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:39 crc kubenswrapper[4793]: I0130 13:44:39.978867 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:39Z","lastTransitionTime":"2026-01-30T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.080492 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183685 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.183812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.286612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.286942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.287090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.287203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.287326 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.360165 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:18:19.004695018 +0000 UTC
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.390843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.420326 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.434822 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.445525 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.460577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.474334 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.492937 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.496309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.509494 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.522023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.535085 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.546115 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.558307 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.568771 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.579734 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.591794 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.596406 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.604840 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.615097 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.626442 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.660983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.661061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.672846 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.676527 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.692884 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.697429 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.709263 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713185 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.713255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.727639 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.731166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.744337 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:40 crc kubenswrapper[4793]: E0130 13:44:40.744452 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
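The TLS failure repeated across the retries above is fully determined by two timestamps: the kubelet's clock reads 2026-01-30T13:44:40Z, while the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired at 2025-08-24T17:21:41Z. A minimal, hypothetical Go probe (a sketch under those assumptions, not a tool from these logs or from OpenShift) can read the certificate's validity window directly; InsecureSkipVerify is set only so the handshake survives the expired certificate long enough to inspect it.

// expired_webhook_probe.go: hypothetical diagnostic sketch, not kubelet or
// OpenShift code. Dials the webhook endpoint named in the log records above
// and prints the peer certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken verbatim from the kubelet error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect only; no trust decision is made here
	})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().UTC().After(cert.NotAfter))
}

On this node such a probe would be expected to print expired: true for any clock later than 2025-08-24T17:21:41Z, matching the x509 error recorded above.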
event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.746663 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.848809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.849448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:40 crc kubenswrapper[4793]: I0130 13:44:40.952507 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:40Z","lastTransitionTime":"2026-01-30T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.055823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.056385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.159933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.262667 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.360772 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:28:43.726225941 +0000 UTC Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.364922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.364946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.364953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.364966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.364975 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.398196 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.398313 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.398506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.398569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.398709 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.398781 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.399100 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:41 crc kubenswrapper[4793]: E0130 13:44:41.399263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.408263 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.467416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.467697 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.467785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.467898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.467997 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.570554 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.570793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.570872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.570991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.571084 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.673194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.673461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.673540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.673618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.673690 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.776081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.776406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.776485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.776566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.776646 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.878654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.878907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.878974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.879071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.879138 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.982096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.982143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.982155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.982172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:41 crc kubenswrapper[4793]: I0130 13:44:41.982184 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:41Z","lastTransitionTime":"2026-01-30T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.085857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.085912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.085925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.085943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.085955 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.188700 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.188736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.188745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.188758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.188767 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.291380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.291924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.292010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.292144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.292248 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.361356 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:22:11.201034361 +0000 UTC Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.394541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.394585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.394595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.394609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.394618 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.497544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.497586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.497598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.497614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.497626 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.600467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.600783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.600867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.600948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.601037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.704412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.704586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.704603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.704626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.704638 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.806718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.806756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.806766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.806781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.806792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.908960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.909281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.909366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.909502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:42 crc kubenswrapper[4793]: I0130 13:44:42.909598 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:42Z","lastTransitionTime":"2026-01-30T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.012400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.012699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.012774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.012860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.013073 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.115711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.115777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.115814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.115843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.115863 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.218214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.218268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.218284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.218304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.218320 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.321123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.321160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.321171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.321184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.321193 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.362226 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:27:45.388084496 +0000 UTC Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397556 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.397757 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.397787 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.397972 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.397992 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:43 crc kubenswrapper[4793]: E0130 13:44:43.398030 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
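
The "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors above are the kubelet repeatedly finding an empty CNI configuration directory: the network operator has not yet written a network config, so every pod sandbox sync is skipped. Below is a minimal Python sketch of an equivalent check, using only the directory path taken from the log; the extension list is an assumption based on common CNI naming conventions, not the kubelet's actual lookup code.

    # Sketch: report whether a CNI network config exists, mirroring the
    # condition behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    # The path comes from the log above; the extensions are an assumption.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def find_cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        try:
            entries = sorted(os.listdir(conf_dir))
        except FileNotFoundError:
            return []
        return [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]

    if __name__ == "__main__":
        configs = find_cni_configs()
        if configs:
            print("CNI config present:", configs)
        else:
            print("no CNI configuration file in %s. Has your network provider started?"
                  % CNI_CONF_DIR)

Once the network operator drops a config file into that directory, the kubelet's next poll sees it, NetworkReady flips to true, and the skipped pods above get sandboxes.
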
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.423851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.423885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.423895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.423911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.423921 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.527072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.527106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.527115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.527129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.527138 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.629671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.629949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.630063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.630173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.630265 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.732168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.732549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.732649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.732751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.732849 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.836336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.836384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.836395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.836415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.836426 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.939617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.939656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.939665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.939680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:43 crc kubenswrapper[4793]: I0130 13:44:43.939697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:43Z","lastTransitionTime":"2026-01-30T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.042390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.042444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.042467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.042487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.042504 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.145739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.145835 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.145862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.145897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.145920 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.249398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.249693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.249790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.249883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.249960 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.352608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.352652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.352663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.352681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.352694 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.362597 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:39:33.411690165 +0000 UTC Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.455365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.455437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.455461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.455499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.455524 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.558720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.558785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.558806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.558837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.558861 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.660951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.661230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.661311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.661388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.661474 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.763340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.763395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.763411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.763435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.763454 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.866541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.866572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.866581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.866613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.866627 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.968818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.968854 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.968867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.968883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:44 crc kubenswrapper[4793]: I0130 13:44:44.968893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:44Z","lastTransitionTime":"2026-01-30T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.071618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.071660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.071670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.071687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.071702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.175271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.175387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.175408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.175431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.175447 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.278000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.278062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.278073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.278093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.278104 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.362990 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:19:48.929972676 +0000 UTC Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.380841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.381224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.381361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.381497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.381624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.397970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.398279 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.398493 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
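
Note the kubelet-serving certificate_manager records in this stretch: the expiration stays fixed at 2026-02-24 05:53:03, but the printed rotation deadline changes on every line and lies in the past relative to the log's clock (2026-01-30), so the manager keeps scheduling an immediate rotation attempt. The deadline moves because it is redrawn with jitter on each pass; in the upstream client-go certificate manager the draw is uniform over roughly the 70-90% band of the certificate's validity window, which matches the spread of deadlines seen here. A Python sketch under that assumption; the notBefore date below is assumed, since the log only prints the expiration.

    # Sketch of a jittered rotation deadline in the style of client-go's
    # certificate manager: a point drawn uniformly from roughly 70%-90% of
    # the certificate's validity window (assumption based on the upstream
    # algorithm; dates taken from the log lines above).
    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        total = (not_after - not_before).total_seconds()
        jittered = total * (0.7 + 0.2 * random.random())
        return not_before + timedelta(seconds=jittered)

    not_after = datetime(2026, 2, 24, 5, 53, 3)   # "Certificate expiration" from the log
    not_before = not_after - timedelta(days=365)  # assumed one-year validity; not logged
    print("rotation deadline:", rotation_deadline(not_before, not_after))
    # Any deadline earlier than "now" (2026-01-30 in this log) makes the
    # manager try to rotate immediately, which is why the line keeps repeating.
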
pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.398524 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.398734 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.398658 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.398576 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:45 crc kubenswrapper[4793]: E0130 13:44:45.399033 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.484771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.485089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.485196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.485309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.485415 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.587830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.692681 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.795924 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:45 crc kubenswrapper[4793]: I0130 13:44:45.899349 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:45Z","lastTransitionTime":"2026-01-30T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.001996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.104232 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.207829 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.310829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.310939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.310954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.311301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.311539 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.363841 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:57:34.270092418 +0000 UTC Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.414144 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.516936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.620886 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.723243 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.826274 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:46 crc kubenswrapper[4793]: I0130 13:44:46.929624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:46Z","lastTransitionTime":"2026-01-30T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.032996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.135896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.135965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.135985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.136010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.136028 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.238810 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.340934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.341416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.364793 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:33:52.893259482 +0000 UTC Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397395 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397407 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.397334 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397685 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:47 crc kubenswrapper[4793]: E0130 13:44:47.397777 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.409971 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.445712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.446641 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.549949 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.652608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.755223 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.858371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.960885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.960958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.960981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.961009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:47 crc kubenswrapper[4793]: I0130 13:44:47.961031 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:47Z","lastTransitionTime":"2026-01-30T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.063975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.064142 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.167151 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.270664 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.366097 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:14:55.724393446 +0000 UTC Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.373516 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.475999 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.578934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.579152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.681664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.681999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.682380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.682699 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.683043 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.787309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.889990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.890477 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:48 crc kubenswrapper[4793]: I0130 13:44:48.993318 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:48Z","lastTransitionTime":"2026-01-30T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.096269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.200824 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.303730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.303987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.304105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.304198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.304269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.367243 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:01:57.386903407 +0000 UTC
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397310 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
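The kubelet-serving entries above and earlier in the log show the same certificate expiry each time but a different rotation deadline on every check: the certificate manager re-draws a jittered deadline inside the certificate's validity window on each pass. A sketch of that computation; the 70-90% window is an assumption modeled on client-go's certificate manager, not read from this cluster:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point late in the cert's validity window,
// which is why consecutive log entries show different deadlines for one cert.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Only the expiry appears in the log; the issue time here is illustrative.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}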
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397527 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397628 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397668 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.397761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.397817 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.407139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.407514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.407897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.408244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.408551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.511598 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.613833 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.716525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.716857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.717015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.717223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.717370 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.819950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820077 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.820118 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.922416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:49Z","lastTransitionTime":"2026-01-30T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:49 crc kubenswrapper[4793]: I0130 13:44:49.978818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.979041 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 13:44:49 crc kubenswrapper[4793]: E0130 13:44:49.979149 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs podName:3401bbdc-090b-402b-bf7b-a4a823182946 nodeName:}" failed. No retries permitted until 2026-01-30 13:45:53.979130324 +0000 UTC m=+164.680478825 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs") pod "network-metrics-daemon-xfcvw" (UID: "3401bbdc-090b-402b-bf7b-a4a823182946") : object "openshift-multus"/"metrics-daemon-secret" not registered
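The "durationBeforeRetry 1m4s" above is exponential backoff: each failed MountVolume attempt roughly doubles the wait, so 64s corresponds to the eighth consecutive failure. A sketch under assumed defaults (500ms initial delay, factor 2, a cap near 2m, modeled on kubelet's nested pending operations; not read from this node):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed backoff parameters; see the note above.
	wait := 500 * time.Millisecond
	max := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, wait)
		wait *= 2
		if wait > max {
			wait = max
		}
	}
}

The backoff only spaces the retries out; the underlying failure is the "not registered" error, meaning the kubelet's secret manager has no registered source for openshift-multus/metrics-daemon-secret yet, so the mount cannot succeed until that object is registered and synced.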
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.024894 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.128967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
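Each "Node became not ready" entry above is the kubelet stamping the same Ready condition into its node status; only the heartbeat and transition times advance between entries. A stdlib-only sketch that reproduces the condition JSON shown in the log (field names mirror the output; this is not how the kubelet builds it internally):

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors the fields of the condition JSON in the entries above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b))
}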
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.232435 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.334976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.335102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.367400 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:02:26.89624168 +0000 UTC Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.398846 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.420245 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.432547 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437633 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.437703 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.459632 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0e
f6c4a18f6eb5b37d8715fcdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.477314 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.488494 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.503239 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.515425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.526168 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.538584 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.539758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.539993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.540004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.540018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.540029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.551393 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.564799 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.581773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.593793 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.607385 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.619154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.631610 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.642352 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.645937 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.661801 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b
29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.671271 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746178 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.746221 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.848958 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.923511 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.926123 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.926669 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.938671 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79
679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.952634 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:50Z","lastTransitionTime":"2026-01-30T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.956090 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.968281 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.981948 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to 
/host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:50 crc kubenswrapper[4793]: I0130 13:44:50.994003 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.003728 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.004665 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.012667 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.019232 4793 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0
878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"size
Bytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.023986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.024083 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.030925 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.042233 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046701 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.046725 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.047907 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30
T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.058227 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/o
vnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.062994 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066569 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.066960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.067690 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.071733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.083827 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.091932 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.095639 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.103140 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc
3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.109412 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.109740 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.111415 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.122328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.139853 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.151723 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.170628 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.197904 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.212591 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.213946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.213981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.213991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.214006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.214032 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.228547 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.317654 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.368878 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:06:49.616275053 +0000 UTC Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398478 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.398773 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398681 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.399018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398699 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.399248 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.398645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.399458 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.419874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.420854 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.522694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.522926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.522992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.523091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.523171 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625696 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.625705 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.728958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.729105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.832874 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.931376 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.931902 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/2.log" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.934280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:51Z","lastTransitionTime":"2026-01-30T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935228 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" exitCode=1 Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935296 4793 scope.go:117] "RemoveContainer" containerID="df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.935878 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:44:51 crc kubenswrapper[4793]: E0130 13:44:51.936036 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.956953 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.980462 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7ffbd9a7cacd109ec08a8ac924ef0d90f2ca0ef6c4a18f6eb5b37d8715fcdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:20Z\\\",\\\"message\\\":\\\"ler-crc\\\\nI0130 13:44:20.449885 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xfcvw\\\\nI0130 13:44:20.449926 6597 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:44:20.450058 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}\\\\nI0130 13:44:20.450131 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0130 13:44:20.450169 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0130 13:44:20.450253 6597 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storag\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"tor-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-rdsch openshift-multus/multus-additional-cni-plugins-nsxfs openshift-multus/network-metrics-daemon-xfcvw openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI0130 13:44:51.565428 6932 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 13:44:51.565439 6932 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565447 6932 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565453 6932 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0130 13:44:51.565457 6932 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0130 13:44:51.565461 6932 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565475 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:44:51.565545 6932 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:51 crc kubenswrapper[4793]: I0130 13:44:51.994311 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 
13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.005376 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.015367 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.027005 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.043711 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.050296 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.069741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.088828 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.102647 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.118387 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.128455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.140437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.146440 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.152184 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.161899 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.172933 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.190656 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.202713 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.218192 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.249271 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.352685 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.370674 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:13:43.467587815 +0000 UTC
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.455398 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.558647 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.660749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.661531 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.763788 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.866488 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.941317 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.944721 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:44:52 crc kubenswrapper[4793]: E0130 13:44:52.944859 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.959987 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.969273 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:52Z","lastTransitionTime":"2026-01-30T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.972286 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z"
Jan 30 13:44:52 crc kubenswrapper[4793]: I0130 13:44:52.984130 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:52Z is after 2025-08-24T17:21:41Z" Jan 30 
13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.006018 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.017469 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.030000 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.049174 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"tor-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-rdsch openshift-multus/multus-additional-cni-plugins-nsxfs openshift-multus/network-metrics-daemon-xfcvw openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI0130 13:44:51.565428 6932 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 13:44:51.565439 6932 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565447 6932 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565453 6932 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0130 13:44:51.565457 6932 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0130 13:44:51.565461 6932 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565475 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:44:51.565545 6932 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.061832 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071751 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.071830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.074935 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.084472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.098074 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.115466 4793 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.128334 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.141563 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.152759 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.164807 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.174359 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.175559 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.188214 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.207707 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276933 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.276943 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.371651 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:01:46.117660695 +0000 UTC Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.379805 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398192 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398301 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398510 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.398620 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398758 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398812 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:53 crc kubenswrapper[4793]: E0130 13:44:53.398869 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.482773 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
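
The status-patch failures above share one root cause: each PATCH from the kubelet's status manager passes through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, months behind the node's clock (2026-01-30). For readers tracing this class of failure, the following minimal Go sketch reproduces the validity check the TLS handshake performs; the certificate path is a hypothetical placeholder, not taken from this host.

// cert_validity.go: a standalone sketch (not from this system's sources).
// It parses a PEM certificate and compares its validity window with the
// current time, the same check that fails the webhook TLS handshake above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving cert.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
		cert.NotBefore.UTC(), cert.NotAfter.UTC(), now)
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired") // the failure mode in the log
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is currently valid")
	}
}

Any certificate whose NotAfter precedes the current time fails the handshake exactly as logged: "x509: certificate has expired or is not yet valid".
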
Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.586356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.688792 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.792711 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.895633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997687 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:53 crc kubenswrapper[4793]: I0130 13:44:53.997755 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:53Z","lastTransitionTime":"2026-01-30T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.099508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.099836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.099932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.100020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.100141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.203742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306223 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.306310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.372743 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:52:28.596418077 +0000 UTC Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.409950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.410173 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.513888 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.616660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.617523 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723208 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.723222 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.826653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:54 crc kubenswrapper[4793]: I0130 13:44:54.929380 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:54Z","lastTransitionTime":"2026-01-30T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.031936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.031993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.032013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.032037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.032113 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135541 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.135576 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.238976 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.341864 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.373246 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:04:13.232956527 +0000 UTC Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397583 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397641 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397654 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.397761 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.397833 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.397945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.398009 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:55 crc kubenswrapper[4793]: E0130 13:44:55.398028 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.444850 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
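
Interleaved with the webhook errors, the node keeps reporting Ready=False because the container runtime finds no CNI network configuration, so the pods that still need a sandbox (network-metrics-daemon-xfcvw, network-check-target-xd92c, network-check-source-55646444c4-trplf, networking-console-plugin-85b44fc459-gdk6g) are skipped on every sync. The check behind the message is essentially a scan of /etc/kubernetes/cni/net.d/ for config files; the sketch below illustrates it, with the extension set (.conf, .conflist, .json) assumed from common libcni conventions rather than quoted from kubelet source.

// cni_config_scan.go: an illustrative sketch, not kubelet source. It lists
// the CNI config candidates a runtime would find in the directory named by
// the log message; the extension set is an assumption (common libcni usage).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory quoted in the log entries
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir:", err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("candidate CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// The condition behind "NetworkReady=false ... no CNI configuration
		// file" in the surrounding entries.
		fmt.Println("no CNI configuration file in", dir)
	}
}

Until the cluster's network plugin writes a config into that directory, the kubelet will keep emitting the same NodeNotReady condition on each status pass, which is why the five-event block repeats every ~100ms here.
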
Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.547805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.548528 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.651931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.652129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.755371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858695 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.858782 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:55 crc kubenswrapper[4793]: I0130 13:44:55.960607 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:55Z","lastTransitionTime":"2026-01-30T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.063519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.166448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.166499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.166511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.166526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.166536 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.268847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.268893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.268903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.268919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.268930 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.372169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.372251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.372271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.372294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.372312 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.374156 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:59:08.193932394 +0000 UTC Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.475634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.475686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.475703 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.475725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.475740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.577850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.577912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.577932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.577959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.577980 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
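
One certificate in these entries is healthy: the kubelet-serving certificate expires on 2026-02-24 05:53:03 UTC, yet certificate_manager.go logs a different "rotation deadline" on each pass (2025-12-02, 2025-11-19, 2025-12-14, 2026-01-02 above). That spread is expected behavior, not clock skew: client-go picks the deadline at a jittered point, roughly 70 to 90 percent of the way through the certificate's lifetime, and recomputes it on every attempt, so a deadline already in the past simply means rotation is considered due now. The sketch below mirrors that computation; the issue time is an assumption (one year before the logged expiry), chosen because the logged deadlines are consistent with it.

// rotation_jitter.go: a simplified sketch of how client-go's certificate
// manager chooses the next rotation deadline. The 70-90% window mirrors its
// jitter; the NotBefore below is an assumption (one year before the logged
// expiry), not a value read from this node.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // from the log
	lifetime := notAfter.Sub(notBefore)
	for i := 0; i < 4; i++ {
		// Uniform in [0.7, 0.9) of the lifetime, recomputed on every attempt,
		// which is why each log line shows a different deadline.
		frac := 0.7 + 0.2*rand.Float64()
		deadline := notBefore.Add(time.Duration(frac * float64(lifetime)))
		fmt.Println("rotation deadline:", deadline.UTC())
	}
	// A deadline in the past (as in all four logged values) means the manager
	// treats rotation as due immediately and retries on its next pass.
}
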
Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.680322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.680384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.680406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.680443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.680465 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.783136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.783180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.783196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.783218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.783234 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.886398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.886516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.886534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.886594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.886617 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.989511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.989570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.989586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.989608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:56 crc kubenswrapper[4793]: I0130 13:44:56.989625 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:56Z","lastTransitionTime":"2026-01-30T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092689 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.092707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.196217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.196245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.196254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.196267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.196277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.298845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.298885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.298894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.298909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.298919 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.374297 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:02:30.644904972 +0000 UTC Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397211 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397273 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397323 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397534 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397652 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.397871 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:57 crc kubenswrapper[4793]: E0130 13:44:57.397976 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.400895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.401007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.401137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.401226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.401309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.503797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.504202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.504377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.504603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.504794 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.608859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.609359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.609583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.609756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.609907 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.713066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.713107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.713116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.713132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.713141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.815985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.816039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.816072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.816103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.816115 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.918588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.918616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.918641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.918656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:57 crc kubenswrapper[4793]: I0130 13:44:57.918663 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:57Z","lastTransitionTime":"2026-01-30T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.020851 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.122754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.122822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.122844 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.122864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.122879 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.225917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.225974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.225995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.226025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.226076 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.329023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.329126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.329138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.329154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.329190 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.374735 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:16:41.391506018 +0000 UTC Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.431298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.431338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.431348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.431366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.431377 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.534370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.534418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.534428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.534449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.534461 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.637179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.637469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.637558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.637658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.637757 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.740361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.740407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.740417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.740432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.740442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.843808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.843862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.843873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.843891 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.843902 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.946016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.946062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.946070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.946083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:58 crc kubenswrapper[4793]: I0130 13:44:58.946107 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:58Z","lastTransitionTime":"2026-01-30T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.048997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.049141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.152683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.152975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.153065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.153150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.153251 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.256162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.256481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.256556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.256637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.256712 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.359386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.359429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.359440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.359461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.359471 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.375880 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:52:41.551327016 +0000 UTC Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397694 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.398116 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397782 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.398570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397725 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.399307 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.397845 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:44:59 crc kubenswrapper[4793]: E0130 13:44:59.399649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.462224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.462311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.462348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.462379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.462401 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.564683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.564969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.565265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.565555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.565882 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.669745 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.669818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.669840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.669867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.669889 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.772464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.772520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.772529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.772543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.772551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.875463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.875519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.875533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.875552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.875564 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.978525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.978595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.978619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.978652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:44:59 crc kubenswrapper[4793]: I0130 13:44:59.978675 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:44:59Z","lastTransitionTime":"2026-01-30T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.081988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.082241 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.185250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.185577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.185661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.185740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.185812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.289618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.289665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.289680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.289702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.289717 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.376335 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:39:27.305695908 +0000 UTC Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.392245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.392549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.392664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.392758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.392844 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.415202 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dad744-dcef-4c9e-88b1-3d8d935794a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ceacdc5ca2463489e07e41571a1f31f77516c24030928f699b9842a2c024bef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b50f690ab4ab427c8afaf9cec2b6d4637731fc7b9874b8fab3ab731da1b5e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d758b4ac3516095ffb1265897d6dadc94b1f1ee3e4d9f13091934f1320629c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a348df9f13c403472aa0ce6541e732306958df80902dbd53a67aa2f5e8e68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31af43c16075b708c5a95fc7813b307a6fc3dc0273cd23d0b8993128bb1bda13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d72d99f766a1d951d0d7c83baf895bd2d3153c85998411befd9e376ad1e76e97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3891abc321e62fc6ae3442029da7c088d7f8c23a801006687c7304ff5a259866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjsp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nsxfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.439233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71f93fe1-7dd7-4557-91d9-63e829052686\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31716686e8eff95a71aca86f4d29b9f0a7e5aed74428b1bceb266273a571fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cbec632a964cfe1b95a67579e0f8be3bffe1af19e50940cca4f04b1397d8fdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a315e5a682045e2d27391e25293e5427a27df424debb83fc338515a48ef4ada4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927e5087e2d7755f5eda8cac47915d186b89d2be6b19dac4c5246e1b14f5df13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6dcda3f2706461a36af85ad53e425262bfc3c0ecc47d37b8cb69d908830645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2c3446aef8e1b23d8edf40ba4ea0b7d92969c56078c852e4afdf2ce7da5f7ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd3a5e37bfe7b1a89e2b9324a3c90c7cba1842bf41c66d4d976e06035104506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cb94829b29fedecfb6888e8cb8c11bda5db87d49aad7ca20ec4791e147bb4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.453644 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027ffa36b0a23e9ef8fb70fc4cadf3be45148affb9a25d68d6d2c367be5a573a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.465191 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mbqcp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a60502c-d692-40e5-bbb7-d07aaaf80f10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7a714bfed4da372a0ad6514dddbdde73636f43e67f3d4d623ac9ecf11935896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpthl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mbqcp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.477648 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f59a12e8-194c-4874-a9ef-2fc58c18fbbe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d70efeac9487cbda5d0e019be5ef13b61521da5f1e54388314d4e11a1370938\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2f6pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rdsch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494798 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.494880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.504998 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f
0c1dd8525865700c059a999a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:51Z\\\",\\\"message\\\":\\\"tor-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-rdsch openshift-multus/multus-additional-cni-plugins-nsxfs openshift-multus/network-metrics-daemon-xfcvw openshift-network-node-identity/network-node-identity-vrzqb]\\\\nI0130 13:44:51.565428 6932 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 13:44:51.565439 6932 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565447 6932 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565453 6932 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0130 13:44:51.565457 6932 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0130 13:44:51.565461 6932 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0130 13:44:51.565475 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:44:51.565545 6932 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8km7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-g62p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.517782 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"421ca100-bd7d-4a7b-9587-a77b5b928c5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:43:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 13:43:31.879541 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:43:31.879698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:43:31.880323 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2829718479/tls.crt::/tmp/serving-cert-2829718479/tls.key\\\\\\\"\\\\nI0130 13:43:32.257936 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:43:32.261961 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:43:32.261993 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:43:32.262034 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:43:32.262055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:43:32.271092 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:43:32.271113 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271118 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:43:32.271123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:43:32.271125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:43:32.271130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:43:32.271133 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:43:32.271298 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:43:32.272604 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.527324 4793 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663819d2e4f5f52c32794a2a6c62508bee5a5b71f94b20f2426ecabd7fa32fab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.537684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"565b40c5-b643-47b5-97b3-49d7772fbdd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e975016768752ffd93aef0a96c5e947f697b2a789278f0eda6854328859512ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://294922dd786c9a15ed79679cfce28acb8d2f546f26f67a22d9b080028316b2c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce3d9d0638bd7294b46f855e978384590eb61181f19c39e550c1c5117621294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.549342 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fae93688678b70ddc20adb8a8f34b349f37227572f5f028ef395d98d9d3f603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.562065 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.574027 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d16db-eb58-4895-8c24-47d6f12b1ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:44:22Z\\\",\\\"message\\\":\\\"2026-01-30T13:43:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52\\\\n2026-01-30T13:43:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae048a2e-e589-401a-ba60-b3f69c45ef52 to /host/opt/cni/bin/\\\\n2026-01-30T13:43:37Z [verbose] multus-daemon started\\\\n2026-01-30T13:43:37Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:44:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.584298 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pxcll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34045014-77ce-47a5-9a21-a69d9f8cab72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://087ef25cc991641ab9a21a59ac3bdd356ebcbbbcc79ae27055b60d0b3b240c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g5hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pxcll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.596415 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3401bbdc-090b-402b-bf7b-a4a823182946\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl5wx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xfcvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.597365 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.610016 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff93550-68ce-4c33-8ec1-5724392d8f30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618d8c3d888bc5e8b3a7ddb7e1f1a5e5b9efd56d3fa7930c8e181ffc49b935f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1079c88b1f3606f0642700ea714ed0d2f1bed48110e1663e9e823a028d2eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfadef9f08672edbc826e233602a34ccea6bdae8efbed091c4d9183eec852044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578a9dac9e5cc87bfc17109a176f6d39b3581250c8c7d8181f6b96782b9dc7e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.622226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78dd92c-34bb-4606-952d-7d1323e4ecd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138ad071341d45922e6b30ca8d58f26e60c6ab9f407f70fd3b7a61bd7cef446d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6a
c839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2bd9d8a8839d7d918e2bed6ac839257768abeaf6c741392909eb1a73ed8b9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:43:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:43:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.635901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37d5d2ac-8c00-4221-8af9-ed9e5bea8a01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d763754f5da6f89b48358fa56118e56fa64f3814ce60c03a413032d143dfe699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f0dfb33bb2c8ff3dd046f4185a5c849462bde90cc5a1bcee9bf958253098b68f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:43:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5lsdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:43:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hb9pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.648608 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.662903 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.700391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803177 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.803206 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:00 crc kubenswrapper[4793]: I0130 13:45:00.905551 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:00Z","lastTransitionTime":"2026-01-30T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.007997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.008007 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.110214 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.213914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.214366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.214990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.215120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.215255 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.227990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.228933 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.241339 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.244895 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.257293 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261691 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.261935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.262030 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.275948 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
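Every status-patch failure above ends in the same TLS error: the webhook's serving certificate at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. A minimal Go sketch of the same validity test, assuming a PEM-encoded certificate file; the path is hypothetical, not the webhook's actual on-disk location:

// certcheck.go - minimal sketch: report whether a PEM certificate is
// valid at the current time, mirroring the x509 "expired or is not yet
// valid" verification error in the records above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}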
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
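The err value embeds the attempted JSON patch under two levels of quoting (the kubelet quotes the patch when formatting the error, and the journal line quotes the err value), which is why every quote surfaces as \\\" in the records above. A sketch of recovering the JSON for inspection, assuming the fragment below was hand-trimmed out of such a line with its escaping intact:

// unescape.go - sketch: recover the JSON patch embedded in a kubelet
// "failed to patch status" log line. The raw fragment is an assumed
// hand-trimmed sample, not pulled automatically from the journal.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// As it appears inside err="..." in the journal line:
	raw := `\"{\\\"status\\\":{\\\"conditions\\\":[{\\\"type\\\":\\\"Ready\\\",\\\"status\\\":\\\"False\\\",\\\"reason\\\":\\\"KubeletNotReady\\\"}]}}\"`

	// First level: the journal line quotes the err value.
	once, err := strconv.Unquote(`"` + raw + `"`)
	if err != nil {
		log.Fatal(err)
	}
	// Second level: the kubelet quoted the patch when formatting the error.
	patch, err := strconv.Unquote(once)
	if err != nil {
		log.Fatal(err)
	}

	var v map[string]any
	if err := json.Unmarshal([]byte(patch), &v); err != nil {
		log.Fatal(err)
	}
	out, _ := json.MarshalIndent(v, "", "  ")
	fmt.Println(string(out)) // the strategic-merge patch the kubelet tried to apply
}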
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.280670 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.293363 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
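These patch failures repeat because the kubelet retries each status sync a fixed number of times before giving up, which is what produces the later 13:45:01.310259 record, "Unable to update node status" err="update node status exceeds retry count". A sketch of that control flow; the retry constant is 5 in the upstream kubelet sources, but treat it and the helper names here as assumptions:

// retry.go - sketch of the kubelet's bounded node-status retry loop.
// nodeStatusUpdateRetry and tryUpdateNodeStatus are assumptions for
// illustration, not the exact upstream code.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed upstream constant

// tryUpdateNodeStatus stands in for the PATCH against the API server;
// here it always fails the way the records above do.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}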
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
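Separately from the webhook failure, the Ready condition stays False because the container runtime reports no CNI configuration in /etc/kubernetes/cni/net.d/. The real check is performed by the runtime's CNI manager (ocicni/libcni); a rough stand-in that looks for the file types the CNI library loads:

// cnicheck.go - rough stand-in for the "no CNI configuration file"
// check: list /etc/kubernetes/cni/net.d/ for conf/conflist/json files.
// Only an approximation of the runtime's actual logic.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}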
event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.296967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.310117 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"605f6c1b-97a6-4742-afaf-97317a89f932\\\",\\\"systemUUID\\\":\\\"3688a16a-f9da-4911-94b1-610f1963c9db\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.310259 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.318277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.377256 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:12:22.463492661 +0000 UTC Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.397472 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.397613 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.397802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.397861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.397981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.398032 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.398153 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:01 crc kubenswrapper[4793]: E0130 13:45:01.398199 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.421355 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.533109 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.635996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.636007 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.737954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.737994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.738006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.738019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.738028 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.840528 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:01 crc kubenswrapper[4793]: I0130 13:45:01.942516 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:01Z","lastTransitionTime":"2026-01-30T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.046628 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.148738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.149307 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251330 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.251397 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.353959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.354109 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.377947 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:41:18.854577032 +0000 UTC
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.456587 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
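Interleaved with the status cycle, certificate_manager re-logs the kubelet-serving certificate about once per second, each time with a different "rotation deadline". This is expected client-go behavior rather than a second fault: the deadline is re-drawn on each pass as a jittered fraction of the certificate's lifetime (upstream comments describe roughly 70-90% of the validity window), and a drawn deadline in the past, as in every such line here, means rotation is due now. A sketch of the draw, with an assumed notBefore since the log prints only the expiration:

    # Approximation of client-go's jittered rotation deadline; the exact
    # jitter bounds (about 70-90% of the certificate lifetime) are taken
    # from upstream comments and may differ between versions.
    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        lifetime = (not_after - not_before).total_seconds()
        fraction = random.uniform(0.7, 0.9)  # re-drawn on every evaluation
        return not_before + timedelta(seconds=lifetime * fraction)

    not_after = datetime(2026, 2, 24, 5, 53, 3)     # expiration from the log
    not_before = not_after - timedelta(days=365)    # assumption: 1-year cert
    print(rotation_deadline(not_before, not_after)) # a past deadline => rotate now

Each pass logs the fresh draw, which is why the deadline differs from line to line below while the expiration stays fixed.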
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.661669 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.763934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764024 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.764366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.867225 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969693 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:02 crc kubenswrapper[4793]: I0130 13:45:02.969763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:02Z","lastTransitionTime":"2026-01-30T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.071950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.072604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.174742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.277526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.277954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.278085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.278183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.278244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.378737 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:14:34.644429297 +0000 UTC Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380708 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.380739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397902 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397932 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398342 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.397965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398418 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:03 crc kubenswrapper[4793]: E0130 13:45:03.398520 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.483962 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.586819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.689655 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791721 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.791818 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.894591 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:03 crc kubenswrapper[4793]: I0130 13:45:03.997644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:03Z","lastTransitionTime":"2026-01-30T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.100210 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.202542 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.305319 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.379576 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:41:10.436905223 +0000 UTC Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.409999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.410148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.514138 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.616801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.616883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.616906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.617002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.617121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.719910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.719974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.720013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.720030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.720041 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.822894 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:04 crc kubenswrapper[4793]: I0130 13:45:04.926753 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:04Z","lastTransitionTime":"2026-01-30T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.029991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.030007 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.138426 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240686 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240696 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.240719 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.342665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.342867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.342937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.343036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.343141 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.380117 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:52:07.119457429 +0000 UTC Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397492 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.398027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.397682 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.398115 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.397524 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:05 crc kubenswrapper[4793]: E0130 13:45:05.398202 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.445371 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.547638 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.650278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.650318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.650326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.650340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.650350 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.752843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.752894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.752906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.752924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.752939 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.856139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.856222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.856244 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.856271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.856295 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.959408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.959463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.959475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.959496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:05 crc kubenswrapper[4793]: I0130 13:45:05.959510 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:05Z","lastTransitionTime":"2026-01-30T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.062162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.062239 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.062262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.062292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.062316 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.165264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.165315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.165325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.165342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.165351 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.268206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.268472 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.268548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.268619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.268688 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.371120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.371171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.371183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.371201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.371216 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.381225 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:29:33.63224149 +0000 UTC Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.473944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.473992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.474007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.474022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.474033 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.576176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.576209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.576240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.576255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.576266 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.679220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.679280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.679292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.679310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.679323 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.781254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.781306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.781317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.781335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.781347 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.883902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.883946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.883957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.883973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.883986 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.987227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.987288 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.987305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.987328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:06 crc kubenswrapper[4793]: I0130 13:45:06.987346 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:06Z","lastTransitionTime":"2026-01-30T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.090229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.090267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.090276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.090290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.090300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.192258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.192302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.192313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.192331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.192340 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.298180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.298231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.298248 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.298266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.298282 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.381596 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:05:05.811136369 +0000 UTC Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.398359 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.398498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.398602 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.398772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.398404 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400204 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.400317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:07 crc kubenswrapper[4793]: E0130 13:45:07.400392 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.400827 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.503408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.503642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.503712 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.503810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.503927 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.606511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.606552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.606564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.606579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.606591 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.708737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.708772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.708781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.708795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.708807 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.811403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.811694 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.811761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.811835 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.811904 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.914714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.914766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.914779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.914796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:07 crc kubenswrapper[4793]: I0130 13:45:07.914805 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:07Z","lastTransitionTime":"2026-01-30T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.016690 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.016732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.016744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.016759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.016770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.119475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.119560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.119588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.119618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.119635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.221924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.221964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.221975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.221991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.222002 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.324213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.324529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.324652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.324750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.324836 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.382497 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:09:59.891365044 +0000 UTC Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.398760 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:45:08 crc kubenswrapper[4793]: E0130 13:45:08.398941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.428172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.428739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.428748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.428763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.428772 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.530562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.531079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.531190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.531286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.531361 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.634327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.634373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.634381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.634397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.634409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.736807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.736863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.736874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.736887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.736897 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.839096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.839132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.839141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.839154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.839164 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.942080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.942133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.942172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.942188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.942197 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:08Z","lastTransitionTime":"2026-01-30T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.997829 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998255 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/0.log" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998298 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" exitCode=1 Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998328 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerDied","Data":"95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"} Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998361 4793 scope.go:117] "RemoveContainer" containerID="9446922302b5b87fbf8fa990b571e9a5d37f98a8f6f6263ae0cb03e5a4cf4812" Jan 30 13:45:08 crc kubenswrapper[4793]: I0130 13:45:08.998814 4793 scope.go:117] "RemoveContainer" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" Jan 30 13:45:08 crc kubenswrapper[4793]: E0130 13:45:08.999082 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2ssnl_openshift-multus(3e8d16db-eb58-4895-8c24-47d6f12b1ea4)\"" pod="openshift-multus/multus-2ssnl" podUID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.044870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.046022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.046069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.046086 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.046096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.046278 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=22.046251114 podStartE2EDuration="22.046251114s" podCreationTimestamp="2026-01-30 13:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.030836197 +0000 UTC m=+119.732184708" watchObservedRunningTime="2026-01-30 13:45:09.046251114 +0000 UTC m=+119.747599605" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.097994 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nsxfs" podStartSLOduration=97.097969586 podStartE2EDuration="1m37.097969586s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.068177849 +0000 UTC m=+119.769526370" watchObservedRunningTime="2026-01-30 13:45:09.097969586 +0000 UTC m=+119.799318097" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.137834 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=97.137813572 podStartE2EDuration="1m37.137813572s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.117411066 +0000 UTC m=+119.818759577" watchObservedRunningTime="2026-01-30 13:45:09.137813572 +0000 UTC m=+119.839162073" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.149388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.149600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.149737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.149810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.149881 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.162373 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mbqcp" podStartSLOduration=98.162355303 podStartE2EDuration="1m38.162355303s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.147776578 +0000 UTC m=+119.849125069" watchObservedRunningTime="2026-01-30 13:45:09.162355303 +0000 UTC m=+119.863703794" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.175738 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podStartSLOduration=98.175716478 podStartE2EDuration="1m38.175716478s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.163659527 +0000 UTC m=+119.865008028" watchObservedRunningTime="2026-01-30 13:45:09.175716478 +0000 UTC m=+119.877064969" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.202584 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pxcll" podStartSLOduration=97.202555139 podStartE2EDuration="1m37.202555139s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.202096567 +0000 UTC m=+119.903445078" watchObservedRunningTime="2026-01-30 13:45:09.202555139 +0000 UTC m=+119.903903630" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.224347 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.22432726 podStartE2EDuration="1m9.22432726s" podCreationTimestamp="2026-01-30 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.224185126 +0000 UTC m=+119.925533627" watchObservedRunningTime="2026-01-30 13:45:09.22432726 +0000 UTC m=+119.925675751" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.234011 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.233990559 podStartE2EDuration="28.233990559s" podCreationTimestamp="2026-01-30 13:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.233941707 +0000 UTC m=+119.935290198" watchObservedRunningTime="2026-01-30 13:45:09.233990559 +0000 UTC m=+119.935339050" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.249075 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=94.249041476 podStartE2EDuration="1m34.249041476s" podCreationTimestamp="2026-01-30 13:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.248982154 +0000 UTC m=+119.950330645" watchObservedRunningTime="2026-01-30 13:45:09.249041476 +0000 UTC m=+119.950389967" Jan 30 13:45:09 
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.382868 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:31:25.356830409 +0000 UTC
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397379 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.397434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397818 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397900 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:45:09 crc kubenswrapper[4793]: E0130 13:45:09.397966 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.561285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.561326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.561351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.561374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.561389 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.664973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.665042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.665099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.665130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.665150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.767877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.767940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.767959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.767982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.767999 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.872382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.872445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.872462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.872485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.872504 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.975404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.975478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.975490 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.975507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:09 crc kubenswrapper[4793]: I0130 13:45:09.975519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:09Z","lastTransitionTime":"2026-01-30T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.003615 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.077531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.077839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.077937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.078034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.078126 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:10Z","lastTransitionTime":"2026-01-30T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.180829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.180894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.180905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.180917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.180929 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:10Z","lastTransitionTime":"2026-01-30T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.283338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.283373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.283402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.283418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.283429 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:10Z","lastTransitionTime":"2026-01-30T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:45:10 crc kubenswrapper[4793]: I0130 13:45:10.384854 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:30:00.403472006 +0000 UTC Jan 30 13:45:10 crc kubenswrapper[4793]: E0130 13:45:10.384897 4793 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 13:45:10 crc kubenswrapper[4793]: E0130 13:45:10.487996 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.385174 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:48:24.650629195 +0000 UTC Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397620 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397660 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.397707 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397781 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397898 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:11 crc kubenswrapper[4793]: E0130 13:45:11.397941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.400913 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:45:11Z","lastTransitionTime":"2026-01-30T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.437011 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hb9pr" podStartSLOduration=98.436987144 podStartE2EDuration="1m38.436987144s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:09.30943217 +0000 UTC m=+120.010780651" watchObservedRunningTime="2026-01-30 13:45:11.436987144 +0000 UTC m=+122.138335635" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.437483 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms"] Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.437935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.440305 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.440319 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.441112 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.442763 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503665 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1b5cd2-75f2-4d59-99f5-3ea731377918-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c1b5cd2-75f2-4d59-99f5-3ea731377918-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1b5cd2-75f2-4d59-99f5-3ea731377918-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503809 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " 
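The startup-latency line above is plain arithmetic: podStartE2EDuration is the observed running time minus podCreationTimestamp, and with both pull timestamps zero the SLO duration equals the E2E duration. A small Go check reproducing the 1m38.436987144s figure from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartE2EDuration arithmetic from the tracker log:
// duration = observed running time - pod creation timestamp.
func main() {
	layout := "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-30 13:43:33 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-30 13:45:11.436987144 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 1m38.436987144s, i.e. podStartSLOduration=98.436987144
}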
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.503842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1b5cd2-75f2-4d59-99f5-3ea731377918-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604861 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c1b5cd2-75f2-4d59-99f5-3ea731377918-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1b5cd2-75f2-4d59-99f5-3ea731377918-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.604986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.605178 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.605240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c1b5cd2-75f2-4d59-99f5-3ea731377918-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.605939 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1b5cd2-75f2-4d59-99f5-3ea731377918-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.609805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1b5cd2-75f2-4d59-99f5-3ea731377918-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.636841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c1b5cd2-75f2-4d59-99f5-3ea731377918-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gqqms\" (UID: \"6c1b5cd2-75f2-4d59-99f5-3ea731377918\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: I0130 13:45:11.753606 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" Jan 30 13:45:11 crc kubenswrapper[4793]: W0130 13:45:11.774564 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1b5cd2_75f2_4d59_99f5_3ea731377918.slice/crio-d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473 WatchSource:0}: Error finding container d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473: Status 404 returned error can't find the container with id d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473 Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.011575 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" event={"ID":"6c1b5cd2-75f2-4d59-99f5-3ea731377918","Type":"ContainerStarted","Data":"99a536b8a2bdd47c0042557739c6ab73621e64b427a46087402619c292519bf1"} Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.011636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" event={"ID":"6c1b5cd2-75f2-4d59-99f5-3ea731377918","Type":"ContainerStarted","Data":"d23c7de44a4aaf5a362c76c6179c26d95beb68f2bf13a3828a3180e8cc545473"} Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.026198 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gqqms" podStartSLOduration=100.026177646 podStartE2EDuration="1m40.026177646s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:12.025465567 +0000 UTC m=+122.726814088" watchObservedRunningTime="2026-01-30 13:45:12.026177646 +0000 UTC m=+122.727526137" Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.386337 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.386385 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 30 13:45:12 crc kubenswrapper[4793]: I0130 13:45:12.395216 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397707 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397741 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397706 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.397834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:45:13 crc kubenswrapper[4793]: I0130 13:45:13.397817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.397927 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946"
Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.397977 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:45:13 crc kubenswrapper[4793]: E0130 13:45:13.398076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.397680 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.397764 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw"
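"Rotating certificates" followed by "Caches populated for *v1.CertificateSigningRequest" shows the kubelet creating a CSR for its new serving certificate and then watching CSR objects for the signed result. A minimal client-go sketch of such a watch, assuming in-cluster credentials; this is not the kubelet's actual rotation code path:

package main

import (
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(cs, 0)
	inf := factory.Certificates().V1().CertificateSigningRequests().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			csr := obj.(*certv1.CertificateSigningRequest)
			fmt.Println("CSR observed:", csr.Name, "signer:", csr.Spec.SignerName)
		},
	})
	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop) // corresponds to the "Caches populated" log line
	select {}                      // keep watching
}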
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.397675 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.397804 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.397888 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.397959 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:15 crc kubenswrapper[4793]: I0130 13:45:15.398732 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.398905 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:15 crc kubenswrapper[4793]: E0130 13:45:15.488916 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397883 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398304 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397962 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398391 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397979 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:17 crc kubenswrapper[4793]: I0130 13:45:17.397930 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:17 crc kubenswrapper[4793]: E0130 13:45:17.398514 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.397985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398542 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.398041 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398696 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.398138 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398796 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:19 crc kubenswrapper[4793]: I0130 13:45:19.398002 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:19 crc kubenswrapper[4793]: E0130 13:45:19.398887 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:20 crc kubenswrapper[4793]: E0130 13:45:20.489479 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397679 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397730 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397856 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.397849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:21 crc kubenswrapper[4793]: I0130 13:45:21.397967 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.398187 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.398212 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:21 crc kubenswrapper[4793]: E0130 13:45:21.398568 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:22 crc kubenswrapper[4793]: I0130 13:45:22.398911 4793 scope.go:117] "RemoveContainer" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.048359 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.048405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd"} Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.069336 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2ssnl" podStartSLOduration=111.069307134 podStartE2EDuration="1m51.069307134s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:23.067732213 +0000 UTC m=+133.769080744" watchObservedRunningTime="2026-01-30 13:45:23.069307134 +0000 UTC m=+133.770655665" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397676 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397700 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397805 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.397797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.397911 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.398018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.398141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.398206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:23 crc kubenswrapper[4793]: I0130 13:45:23.398860 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:45:23 crc kubenswrapper[4793]: E0130 13:45:23.399024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-g62p5_openshift-ovn-kubernetes(5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397436 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397534 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.397632 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.397784 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
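The "back-off 40s restarting failed container=ovnkube-controller" line above reflects the kubelet's exponential restart backoff for crashing containers. A sketch under the commonly cited upstream defaults, which are assumptions here: 10s base, doubled per consecutive crash, capped at 5m:

package main

import (
	"fmt"
	"time"
)

// backoff sketches kubelet-style crash restart delay: 10s base,
// doubled per consecutive failure, capped at 5 minutes (base, factor
// and cap are assumptions from upstream defaults, not read from CRI-O).
func backoff(failures int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("crash #%d -> back-off %s\n", n, backoff(n))
	}
	// crash #3 -> back-off 40s, matching the ovnkube-controller log line above
}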
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:25 crc kubenswrapper[4793]: I0130 13:45:25.397831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.397952 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.398098 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:25 crc kubenswrapper[4793]: E0130 13:45:25.491002 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397894 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.398846 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397982 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:27 crc kubenswrapper[4793]: I0130 13:45:27.397982 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.399159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.399177 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:27 crc kubenswrapper[4793]: E0130 13:45:27.399275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398246 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398290 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399436 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399290 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:29 crc kubenswrapper[4793]: I0130 13:45:29.398407 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399589 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:29 crc kubenswrapper[4793]: E0130 13:45:29.399735 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:30 crc kubenswrapper[4793]: E0130 13:45:30.491636 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397245 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397269 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397611 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397707 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:31 crc kubenswrapper[4793]: I0130 13:45:31.397287 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397858 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:31 crc kubenswrapper[4793]: E0130 13:45:31.397795 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397811 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397872 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397900 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:33 crc kubenswrapper[4793]: I0130 13:45:33.397839 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.397965 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.398122 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.398186 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:33 crc kubenswrapper[4793]: E0130 13:45:33.398299 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398220 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398350 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:35 crc kubenswrapper[4793]: I0130 13:45:35.398354 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399265 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399439 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.399547 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:35 crc kubenswrapper[4793]: E0130 13:45:35.494106 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.397915 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.398107 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.398159 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398297 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.398307 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398739 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398839 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:37 crc kubenswrapper[4793]: E0130 13:45:37.398910 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:37 crc kubenswrapper[4793]: I0130 13:45:37.399279 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.099033 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.102429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerStarted","Data":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.103229 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.130602 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podStartSLOduration=126.130580603 podStartE2EDuration="2m6.130580603s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:38.129403812 +0000 UTC m=+148.830752323" watchObservedRunningTime="2026-01-30 13:45:38.130580603 +0000 UTC m=+148.831929104" Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.681288 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xfcvw"] Jan 30 13:45:38 crc kubenswrapper[4793]: I0130 13:45:38.681629 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:38 crc kubenswrapper[4793]: E0130 13:45:38.681710 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:39 crc kubenswrapper[4793]: I0130 13:45:39.397754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:39 crc kubenswrapper[4793]: I0130 13:45:39.397793 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:39 crc kubenswrapper[4793]: E0130 13:45:39.397916 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:39 crc kubenswrapper[4793]: I0130 13:45:39.397973 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:39 crc kubenswrapper[4793]: E0130 13:45:39.398080 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:39 crc kubenswrapper[4793]: E0130 13:45:39.398142 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:40 crc kubenswrapper[4793]: I0130 13:45:40.398402 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:40 crc kubenswrapper[4793]: E0130 13:45:40.399627 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:40 crc kubenswrapper[4793]: E0130 13:45:40.494507 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322323 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322567 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.322532907 +0000 UTC m=+274.023881398 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.322758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322885 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322918 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322945 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323004 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323029 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.322957 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.322947218 +0000 UTC m=+274.024295709 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.323109622 +0000 UTC m=+274.024458143 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.323169 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.323157443 +0000 UTC m=+274.024505964 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.397751 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.397879 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.398384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.397912 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.398847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.400005 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:41 crc kubenswrapper[4793]: I0130 13:45:41.423730 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.423948 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.423978 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.423993 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:45:41 crc kubenswrapper[4793]: E0130 13:45:41.424106 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:47:43.424059113 +0000 UTC m=+274.125407604 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:45:42 crc kubenswrapper[4793]: I0130 13:45:42.397786 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:42 crc kubenswrapper[4793]: E0130 13:45:42.398206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:42 crc kubenswrapper[4793]: I0130 13:45:42.413352 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:45:42 crc kubenswrapper[4793]: I0130 13:45:42.413422 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:45:43 crc kubenswrapper[4793]: I0130 13:45:43.397761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:43 crc kubenswrapper[4793]: I0130 13:45:43.397824 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:43 crc kubenswrapper[4793]: E0130 13:45:43.397906 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:43 crc kubenswrapper[4793]: I0130 13:45:43.397787 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:43 crc kubenswrapper[4793]: E0130 13:45:43.398024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:43 crc kubenswrapper[4793]: E0130 13:45:43.398122 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:44 crc kubenswrapper[4793]: I0130 13:45:44.397798 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:44 crc kubenswrapper[4793]: E0130 13:45:44.398109 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xfcvw" podUID="3401bbdc-090b-402b-bf7b-a4a823182946" Jan 30 13:45:45 crc kubenswrapper[4793]: I0130 13:45:45.397412 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:45 crc kubenswrapper[4793]: I0130 13:45:45.397465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:45 crc kubenswrapper[4793]: I0130 13:45:45.397502 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:45 crc kubenswrapper[4793]: E0130 13:45:45.397569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:45:45 crc kubenswrapper[4793]: E0130 13:45:45.397730 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:45:45 crc kubenswrapper[4793]: E0130 13:45:45.397815 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:45:46 crc kubenswrapper[4793]: I0130 13:45:46.397986 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:46 crc kubenswrapper[4793]: I0130 13:45:46.400484 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:45:46 crc kubenswrapper[4793]: I0130 13:45:46.409905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.401483 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.401483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.401488 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.403746 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.404270 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.404747 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:45:47 crc kubenswrapper[4793]: I0130 13:45:47.404796 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:45:51 crc kubenswrapper[4793]: I0130 13:45:51.274587 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.487417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.529294 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.529856 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.530428 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.530974 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.532337 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.532491 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.532632 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-65rgb"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.533032 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.533470 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ztcbh"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.534025 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.535263 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.535690 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536407 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cwwfj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536568 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536840 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.536925 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.537369 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sd6hs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.537769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.539732 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrj8g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.540114 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.540203 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.540394 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.541092 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-client\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549851 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs25n\" (UniqueName: \"kubernetes.io/projected/ea703d52-c081-418f-9343-61b68296314f-kube-api-access-qs25n\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-trusted-ca\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.549988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-serving-cert\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8wz\" (UniqueName: \"kubernetes.io/projected/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-kube-api-access-wl8wz\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-service-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: 
\"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550223 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-encryption-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550327 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-etcd-client\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550351 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-image-import-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550649 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99444dfd-71c4-4d2d-a94a-cecc7a740423-metrics-tls\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550700 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhrw\" (UniqueName: \"kubernetes.io/projected/99444dfd-71c4-4d2d-a94a-cecc7a740423-kube-api-access-5nhrw\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-serving-cert\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550948 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-config\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.550980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-audit\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551039 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-node-pullsecrets\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551294 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551355 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4z45\" (UniqueName: \"kubernetes.io/projected/3806824c-28d3-47d4-b33f-01d9ab1239b8-kube-api-access-n4z45\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-audit-dir\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551434 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-serving-cert\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9kbq\" (UniqueName: \"kubernetes.io/projected/6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2-kube-api-access-r9kbq\") pod \"downloads-7954f5f757-sd6hs\" (UID: \"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2\") " pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-config\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551544 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.551607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-etcd-serving-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.556999 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.557289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.557555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.558311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.558495 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.557116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:45:52 crc 
kubenswrapper[4793]: I0130 13:45:52.563023 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.563428 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.563551 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.567906 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.567990 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.567910 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.568418 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.568530 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.568695 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.569315 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.569826 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.570699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.572190 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.576457 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.576779 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.587435 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.587611 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.589067 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-56g7n"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.589537 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.591358 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.591777 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.591941 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592188 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592253 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592526 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592567 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592670 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592764 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592850 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.592987 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.593169 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594359 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594440 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594600 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594647 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594691 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.594898 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595109 
4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595122 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595241 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595400 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595553 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595738 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.595932 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.596135 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.596731 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.598752 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599232 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599701 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599907 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.599958 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.600203 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.600470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.603272 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.605798 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5l76j"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.606255 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.606845 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.608351 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.608816 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.609113 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.609470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.612997 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-899ps"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.613710 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.614093 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.614143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615302 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615416 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615837 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.615963 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.616408 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.617355 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.617621 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.617705 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.618163 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.619477 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2lv2p"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.619954 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.620599 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.622674 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.623143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.625512 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.630559 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.631306 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.631689 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.631942 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.632233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.632820 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.634380 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.634386 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.649197 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.649293 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.649434 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650184 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650427 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650667 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.650905 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651281 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651445 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651670 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.651915 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.652187 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.652211 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.654309 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.654546 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.655537 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.657430 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660719 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660919 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b72b54ef-6699-4091-b47d-f05f7c85adb2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.660999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-stats-auth\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbjm\" (UniqueName: \"kubernetes.io/projected/b72b54ef-6699-4091-b47d-f05f7c85adb2-kube-api-access-2mbjm\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661115 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-encryption-config\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tk6n\" (UniqueName: \"kubernetes.io/projected/7fc1ca51-0362-4492-ba07-8c5413c39deb-kube-api-access-9tk6n\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661246 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-serving-cert\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94vd\" (UniqueName: \"kubernetes.io/projected/d2aa0043-dc77-41ca-a95f-2d119ed48053-kube-api-access-r94vd\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-default-certificate\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661463 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661585 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661600 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-service-ca-bundle\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-metrics-certs\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661669 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661701 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661815 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.662833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.664119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.664908 4793 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.661814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665471 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fc1ca51-0362-4492-ba07-8c5413c39deb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665582 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46caba5b-4a87-480a-ac56-437102a31802-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665692 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs25n\" (UniqueName: \"kubernetes.io/projected/ea703d52-c081-418f-9343-61b68296314f-kube-api-access-qs25n\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665936 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpjf\" (UniqueName: \"kubernetes.io/projected/e2a53aac-c9f7-465c-821b-cd62aa893d13-kube-api-access-9kpjf\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.665994 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgsg\" (UniqueName: \"kubernetes.io/projected/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-kube-api-access-wfgsg\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666023 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-client\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46caba5b-4a87-480a-ac56-437102a31802-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666344 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-trusted-ca\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666369 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnpxc\" (UniqueName: \"kubernetes.io/projected/46caba5b-4a87-480a-ac56-437102a31802-kube-api-access-lnpxc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666516 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666889 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pcw\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-kube-api-access-p2pcw\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.666954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-serving-cert\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.667001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.667109 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8wz\" (UniqueName: \"kubernetes.io/projected/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-kube-api-access-wl8wz\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.667154 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.668592 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669298 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669322 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669396 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669444 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669543 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.669555 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.699835 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.700005 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.700482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-trusted-ca\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.701429 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.701579 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702314 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702347 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702416 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702487 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702600 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702824 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.702842 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703094 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703352 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.703814 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704136 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704250 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.685360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51800ff9-fe19-4a50-a272-be1de629ec82-config\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6fx\" (UniqueName: \"kubernetes.io/projected/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-kube-api-access-jf6fx\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-auth-proxy-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-policies\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704587 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cd7922e2-3b17-4212-94b3-2405e20841ad-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-service-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704623 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-encryption-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704638 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-etcd-client\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704702 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a53aac-c9f7-465c-821b-cd62aa893d13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-image-import-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: 
\"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ft4\" (UniqueName: \"kubernetes.io/projected/4e62edf8-f827-4fa6-8b40-563c821707ae-kube-api-access-82ft4\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704814 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvm8\" (UniqueName: \"kubernetes.io/projected/c44b9aaf-de3a-48a8-8760-5553255887ac-kube-api-access-jlvm8\") pod \"migrator-59844c95c7-q5442\" (UID: \"c44b9aaf-de3a-48a8-8760-5553255887ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704829 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1faa169d-53de-456e-8f99-f93dc2772719-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce07df7-af19-4334-b704-818df47958a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704860 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704915 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-config\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704932 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99444dfd-71c4-4d2d-a94a-cecc7a740423-metrics-tls\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704947 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhrw\" (UniqueName: \"kubernetes.io/projected/99444dfd-71c4-4d2d-a94a-cecc7a740423-kube-api-access-5nhrw\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.704989 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705003 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7922e2-3b17-4212-94b3-2405e20841ad-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705033 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-serving-cert\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705060 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51800ff9-fe19-4a50-a272-be1de629ec82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705091 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705110 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-config\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-audit\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705138 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-dir\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705153 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-config\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705186 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705209 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705224 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-machine-approver-tls\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwgj\" (UniqueName: \"kubernetes.io/projected/4ce07df7-af19-4334-b704-818df47958a1-kube-api-access-ckwgj\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705255 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-node-pullsecrets\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705268 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e62edf8-f827-4fa6-8b40-563c821707ae-serving-cert\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705281 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daa9599a-67b0-421e-8add-0656c0b98af2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705321 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705354 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4z45\" (UniqueName: \"kubernetes.io/projected/3806824c-28d3-47d4-b33f-01d9ab1239b8-kube-api-access-n4z45\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-audit-dir\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a53aac-c9f7-465c-821b-cd62aa893d13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-serving-cert\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9kbq\" (UniqueName: 
\"kubernetes.io/projected/6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2-kube-api-access-r9kbq\") pod \"downloads-7954f5f757-sd6hs\" (UID: \"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2\") " pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705530 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51800ff9-fe19-4a50-a272-be1de629ec82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705571 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1faa169d-53de-456e-8f99-f93dc2772719-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-config\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705614 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705635 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1faa169d-53de-456e-8f99-f93dc2772719-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-etcd-serving-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlclv\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-kube-api-access-wlclv\") 
pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-client\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/daa9599a-67b0-421e-8add-0656c0b98af2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.705772 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ce07df7-af19-4334-b704-818df47958a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.706440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-service-ca\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.706837 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.711172 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.713616 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.714603 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.715037 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.715286 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.715397 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.716793 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.717061 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.720648 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-node-pullsecrets\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.721858 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806824c-28d3-47d4-b33f-01d9ab1239b8-config\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.722309 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-audit\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.723125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-etcd-serving-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.724960 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.725410 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9v6k"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.725697 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.726023 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.727714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.727847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea703d52-c081-418f-9343-61b68296314f-audit-dir\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.728316 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.728459 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.729462 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.731488 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.731525 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-image-import-ca\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.731904 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.732404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.732636 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.733329 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.734134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-etcd-client\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.735744 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gsr67"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.736405 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5l76j"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.736484 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.736684 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.737144 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-encryption-config\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.738208 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.738384 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.738638 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-etcd-client\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea703d52-c081-418f-9343-61b68296314f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739571 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-config\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739624 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrj8g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.739851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-serving-cert\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.742015 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-988dg"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.744673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745644 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745664 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745721 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.745869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.746610 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea703d52-c081-418f-9343-61b68296314f-serving-cert\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.746651 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cwwfj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.748964 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3806824c-28d3-47d4-b33f-01d9ab1239b8-serving-cert\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.749037 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ztcbh"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.749580 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99444dfd-71c4-4d2d-a94a-cecc7a740423-metrics-tls\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:52 crc 
kubenswrapper[4793]: I0130 13:45:52.752364 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.753988 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-65rgb"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.754483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.755274 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.766807 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.766876 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.773748 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.782407 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.784622 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.789109 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-56g7n"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.799741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807503 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b72b54ef-6699-4091-b47d-f05f7c85adb2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-stats-auth\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-encryption-config\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tk6n\" (UniqueName: \"kubernetes.io/projected/7fc1ca51-0362-4492-ba07-8c5413c39deb-kube-api-access-9tk6n\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbjm\" (UniqueName: \"kubernetes.io/projected/b72b54ef-6699-4091-b47d-f05f7c85adb2-kube-api-access-2mbjm\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807697 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94vd\" (UniqueName: \"kubernetes.io/projected/d2aa0043-dc77-41ca-a95f-2d119ed48053-kube-api-access-r94vd\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-default-certificate\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-serving-cert\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807804 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807863 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-service-ca-bundle\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-metrics-certs\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807953 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fc1ca51-0362-4492-ba07-8c5413c39deb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.807981 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46caba5b-4a87-480a-ac56-437102a31802-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc 
kubenswrapper[4793]: I0130 13:45:52.808002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808027 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808073 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpjf\" (UniqueName: \"kubernetes.io/projected/e2a53aac-c9f7-465c-821b-cd62aa893d13-kube-api-access-9kpjf\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808176 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46caba5b-4a87-480a-ac56-437102a31802-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808201 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808255 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgsg\" (UniqueName: \"kubernetes.io/projected/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-kube-api-access-wfgsg\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-client\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808306 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnpxc\" (UniqueName: \"kubernetes.io/projected/46caba5b-4a87-480a-ac56-437102a31802-kube-api-access-lnpxc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pcw\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-kube-api-access-p2pcw\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51800ff9-fe19-4a50-a272-be1de629ec82-config\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6fx\" (UniqueName: \"kubernetes.io/projected/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-kube-api-access-jf6fx\") pod 
\"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808503 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-auth-proxy-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808529 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-policies\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808555 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd7922e2-3b17-4212-94b3-2405e20841ad-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808578 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808607 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808637 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a53aac-c9f7-465c-821b-cd62aa893d13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ft4\" (UniqueName: \"kubernetes.io/projected/4e62edf8-f827-4fa6-8b40-563c821707ae-kube-api-access-82ft4\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808693 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: 
\"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808772 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1faa169d-53de-456e-8f99-f93dc2772719-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvm8\" (UniqueName: \"kubernetes.io/projected/c44b9aaf-de3a-48a8-8760-5553255887ac-kube-api-access-jlvm8\") pod \"migrator-59844c95c7-q5442\" (UID: \"c44b9aaf-de3a-48a8-8760-5553255887ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808822 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-config\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808876 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce07df7-af19-4334-b704-818df47958a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808911 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.808936 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 
13:45:52.808974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51800ff9-fe19-4a50-a272-be1de629ec82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809136 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7922e2-3b17-4212-94b3-2405e20841ad-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809160 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-dir\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809201 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-config\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-machine-approver-tls\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809278 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwgj\" (UniqueName: 
\"kubernetes.io/projected/4ce07df7-af19-4334-b704-818df47958a1-kube-api-access-ckwgj\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809319 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daa9599a-67b0-421e-8add-0656c0b98af2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e62edf8-f827-4fa6-8b40-563c821707ae-serving-cert\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809428 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809454 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809495 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809534 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a53aac-c9f7-465c-821b-cd62aa893d13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: 
\"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51800ff9-fe19-4a50-a272-be1de629ec82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1faa169d-53de-456e-8f99-f93dc2772719-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1faa169d-53de-456e-8f99-f93dc2772719-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809706 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlclv\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-kube-api-access-wlclv\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ce07df7-af19-4334-b704-818df47958a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.809791 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/daa9599a-67b0-421e-8add-0656c0b98af2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.814652 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-encryption-config\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.815888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.816614 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a53aac-c9f7-465c-821b-cd62aa893d13-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.825845 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.825884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-serving-cert\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.827635 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.828218 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.828655 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.828879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.829422 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.830277 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.830793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-auth-proxy-config\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.831521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.836541 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.836588 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-899ps"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.836667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.838337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-policies\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.843097 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46caba5b-4a87-480a-ac56-437102a31802-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.851897 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1faa169d-53de-456e-8f99-f93dc2772719-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.852847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cd7922e2-3b17-4212-94b3-2405e20841ad-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.855162 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.855797 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.856364 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-config\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.857277 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.857313 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gsr67"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.857326 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.866995 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2aa0043-dc77-41ca-a95f-2d119ed48053-audit-dir\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.869432 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1faa169d-53de-456e-8f99-f93dc2772719-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.870600 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.874805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-machine-approver-tls\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.874828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2aa0043-dc77-41ca-a95f-2d119ed48053-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.875346 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.875790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7922e2-3b17-4212-94b3-2405e20841ad-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.875835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce07df7-af19-4334-b704-818df47958a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fc1ca51-0362-4492-ba07-8c5413c39deb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.876807 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877158 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ce07df7-af19-4334-b704-818df47958a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:52 crc 
kubenswrapper[4793]: I0130 13:45:52.877318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e62edf8-f827-4fa6-8b40-563c821707ae-serving-cert\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877467 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2aa0043-dc77-41ca-a95f-2d119ed48053-etcd-client\") pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e62edf8-f827-4fa6-8b40-563c821707ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.877900 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.878020 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.878058 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.879034 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.880186 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46caba5b-4a87-480a-ac56-437102a31802-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.880497 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 
13:45:52.882859 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.883156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.885320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a53aac-c9f7-465c-821b-cd62aa893d13-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.885387 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4pnff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.886239 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2lf59"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.886602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.886876 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.887420 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.887569 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.888725 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.893817 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.894851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.898110 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9v6k"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.899228 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sd6hs"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.900292 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.903688 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.905189 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.906867 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.908752 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.910117 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.911967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.912670 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2lf59"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.912773 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.913414 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.915230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.917069 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.918103 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4pnff"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.919369 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"] Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.932907 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.952151 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.960087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.972245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.977381 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:52 crc kubenswrapper[4793]: I0130 13:45:52.992536 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.012845 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.031748 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.052285 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.072200 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.104670 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.112889 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51800ff9-fe19-4a50-a272-be1de629ec82-config\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.114009 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.131532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-stats-auth\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.134167 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.137395 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-service-ca-bundle\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.152930 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.160193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-metrics-certs\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.172129 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.192898 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.212632 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.224606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-default-certificate\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.232364 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.252456 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.273212 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.293572 
4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.312368 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.320965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51800ff9-fe19-4a50-a272-be1de629ec82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.332025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.343574 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/daa9599a-67b0-421e-8add-0656c0b98af2-metrics-tls\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.352370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.372907 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.393441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.417396 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.419258 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daa9599a-67b0-421e-8add-0656c0b98af2-trusted-ca\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.432452 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.452587 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.472303 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.493220 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.512293 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 
13:45:53.533305 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.552128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.572490 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.581824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.592707 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.597730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-config\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.631845 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.653273 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.665399 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b72b54ef-6699-4091-b47d-f05f7c85adb2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.670969 4793 request.go:700] Waited for 1.002938905s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.672750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.693032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.713623 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.732713 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.753529 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.773560 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.792600 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.833114 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs25n\" (UniqueName: \"kubernetes.io/projected/ea703d52-c081-418f-9343-61b68296314f-kube-api-access-qs25n\") pod \"apiserver-76f77b778f-cwwfj\" (UID: \"ea703d52-c081-418f-9343-61b68296314f\") " pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.840132 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.846229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8wz\" (UniqueName: \"kubernetes.io/projected/e8aacb4a-f044-427a-b5ef-1d4126b98a6a-kube-api-access-wl8wz\") pod \"console-operator-58897d9998-65rgb\" (UID: \"e8aacb4a-f044-427a-b5ef-1d4126b98a6a\") " pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.852512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.872406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.894396 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.912320 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.932750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.957886 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:45:53 crc kubenswrapper[4793]: I0130 13:45:53.972765 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.031037 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"route-controller-manager-6576b87f9c-j5zhl\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.031703 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.032912 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.034671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3401bbdc-090b-402b-bf7b-a4a823182946-metrics-certs\") pod \"network-metrics-daemon-xfcvw\" (UID: \"3401bbdc-090b-402b-bf7b-a4a823182946\") " pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.052099 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.068720 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.072170 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.079862 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cwwfj"] Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.090567 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea703d52_c081_418f_9343_61b68296314f.slice/crio-9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc WatchSource:0}: Error finding container 9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc: Status 404 returned error can't find the container with id 9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.091906 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.102660 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.112877 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.132743 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.153786 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.186923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerStarted","Data":"9e2af840fa5b89adf95a0c581e72512f88e825192e31c92e8477d6a8c2e03dbc"} Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.193857 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhrw\" (UniqueName: \"kubernetes.io/projected/99444dfd-71c4-4d2d-a94a-cecc7a740423-kube-api-access-5nhrw\") pod \"dns-operator-744455d44c-ztcbh\" (UID: \"99444dfd-71c4-4d2d-a94a-cecc7a740423\") " pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.212207 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.219448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xfcvw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.233366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.243875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9kbq\" (UniqueName: \"kubernetes.io/projected/6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2-kube-api-access-r9kbq\") pod \"downloads-7954f5f757-sd6hs\" (UID: \"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2\") " pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.252043 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.281139 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.286245 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4z45\" (UniqueName: \"kubernetes.io/projected/3806824c-28d3-47d4-b33f-01d9ab1239b8-kube-api-access-n4z45\") pod \"etcd-operator-b45778765-zrj8g\" (UID: \"3806824c-28d3-47d4-b33f-01d9ab1239b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.295428 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.313813 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 13:45:54 crc kubenswrapper[4793]: 
I0130 13:45:54.339158 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.352268 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.372945 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.377777 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-65rgb"] Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.384229 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8aacb4a_f044_427a_b5ef_1d4126b98a6a.slice/crio-f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225 WatchSource:0}: Error finding container f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225: Status 404 returned error can't find the container with id f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225 Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.391719 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.407566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.412760 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.433304 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.453315 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.459974 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xfcvw"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.461643 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.472580 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.481683 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3401bbdc_090b_402b_bf7b_a4a823182946.slice/crio-f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe WatchSource:0}: Error finding container f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe: Status 404 returned error can't find the container with id f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.492028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.512096 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.532762 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.552533 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.576754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.580022 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ztcbh"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.594030 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tk6n\" (UniqueName: \"kubernetes.io/projected/7fc1ca51-0362-4492-ba07-8c5413c39deb-kube-api-access-9tk6n\") pod \"cluster-samples-operator-665b6dd947-7x8ff\" (UID: \"7fc1ca51-0362-4492-ba07-8c5413c39deb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.607179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ft4\" (UniqueName: \"kubernetes.io/projected/4e62edf8-f827-4fa6-8b40-563c821707ae-kube-api-access-82ft4\") pod \"authentication-operator-69f744f599-5l76j\" (UID: \"4e62edf8-f827-4fa6-8b40-563c821707ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.630302 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbjm\" (UniqueName: \"kubernetes.io/projected/b72b54ef-6699-4091-b47d-f05f7c85adb2-kube-api-access-2mbjm\") pod \"multus-admission-controller-857f4d67dd-mnzcq\" (UID: \"b72b54ef-6699-4091-b47d-f05f7c85adb2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.649690 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94vd\" (UniqueName: \"kubernetes.io/projected/d2aa0043-dc77-41ca-a95f-2d119ed48053-kube-api-access-r94vd\") 
pod \"apiserver-7bbb656c7d-9s5tx\" (UID: \"d2aa0043-dc77-41ca-a95f-2d119ed48053\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.653544 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.667872 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sd6hs"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.672375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3f6bee7-a66e-4cec-83d5-6c0796a73e22-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wj2bx\" (UID: \"d3f6bee7-a66e-4cec-83d5-6c0796a73e22\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.690395 4793 request.go:700] Waited for 1.863407367s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.691786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"console-f9d7485db-kknzc\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:54 crc kubenswrapper[4793]: W0130 13:45:54.693837 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9a73cf_3a15_4a72_9d5a_2cdd62318ea2.slice/crio-bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3 WatchSource:0}: Error finding container bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3: Status 404 returned error can't find the container with id bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3 Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.710693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnpxc\" (UniqueName: \"kubernetes.io/projected/46caba5b-4a87-480a-ac56-437102a31802-kube-api-access-lnpxc\") pod \"openshift-controller-manager-operator-756b6f6bc6-dw8jz\" (UID: \"46caba5b-4a87-480a-ac56-437102a31802\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.711383 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.719314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.726241 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.728279 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.768841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2pcw\" (UniqueName: \"kubernetes.io/projected/daa9599a-67b0-421e-8add-0656c0b98af2-kube-api-access-p2pcw\") pod \"ingress-operator-5b745b69d9-v476x\" (UID: \"daa9599a-67b0-421e-8add-0656c0b98af2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.786290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpjf\" (UniqueName: \"kubernetes.io/projected/e2a53aac-c9f7-465c-821b-cd62aa893d13-kube-api-access-9kpjf\") pod \"openshift-apiserver-operator-796bbdcf4f-9tb5z\" (UID: \"e2a53aac-c9f7-465c-821b-cd62aa893d13\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.788908 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6fx\" (UniqueName: \"kubernetes.io/projected/0e50ecc2-1bbc-4e8c-8d46-edf8369095bc-kube-api-access-jf6fx\") pod \"router-default-5444994796-2lv2p\" (UID: \"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc\") " pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.792399 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.808586 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgsg\" (UniqueName: \"kubernetes.io/projected/7c31ba39-5ef3-458b-89c1-eb43adfa3d7f-kube-api-access-wfgsg\") pod \"machine-approver-56656f9798-h5zfs\" (UID: \"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.810223 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.820531 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.824611 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.826214 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvm8\" (UniqueName: \"kubernetes.io/projected/c44b9aaf-de3a-48a8-8760-5553255887ac-kube-api-access-jlvm8\") pod \"migrator-59844c95c7-q5442\" (UID: \"c44b9aaf-de3a-48a8-8760-5553255887ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.866805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"oauth-openshift-558db77b4-s2mcj\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.875753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"controller-manager-879f6c89f-qsdzw\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.889037 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckwgj\" (UniqueName: \"kubernetes.io/projected/4ce07df7-af19-4334-b704-818df47958a1-kube-api-access-ckwgj\") pod \"openshift-config-operator-7777fb866f-899ps\" (UID: \"4ce07df7-af19-4334-b704-818df47958a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.907734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.927490 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51800ff9-fe19-4a50-a272-be1de629ec82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jb6f2\" (UID: \"51800ff9-fe19-4a50-a272-be1de629ec82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.930880 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff"] Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.950958 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1faa169d-53de-456e-8f99-f93dc2772719-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m754g\" (UID: \"1faa169d-53de-456e-8f99-f93dc2772719\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.961594 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.973585 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.976238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlclv\" (UniqueName: \"kubernetes.io/projected/cd7922e2-3b17-4212-94b3-2405e20841ad-kube-api-access-wlclv\") pod \"cluster-image-registry-operator-dc59b4c8b-cm7mm\" (UID: \"cd7922e2-3b17-4212-94b3-2405e20841ad\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.996754 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:45:54 crc kubenswrapper[4793]: I0130 13:45:54.997065 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.007762 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.015660 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.027808 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.031918 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrj8g"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.044264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.046769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.047167 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.055061 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.057785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.060518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.069444 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.074947 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.077473 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.097959 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.114706 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5l76j"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.149240 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/afa7929d-37a8-4fa2-9733-158cab1c40ec-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghn8d\" (UniqueName: \"kubernetes.io/projected/afa7929d-37a8-4fa2-9733-158cab1c40ec-kube-api-access-ghn8d\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-images\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174588 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzl2\" (UniqueName: \"kubernetes.io/projected/9fca2cfc-e4a0-42a0-9815-424987b55fd5-kube-api-access-pxzl2\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174617 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174668 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-config\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174693 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fca2cfc-e4a0-42a0-9815-424987b55fd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174714 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174733 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fca2cfc-e4a0-42a0-9815-424987b55fd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.174795 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.175828 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.67581835 +0000 UTC m=+166.377166841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: W0130 13:45:55.196296 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3806824c_28d3_47d4_b33f_01d9ab1239b8.slice/crio-ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02 WatchSource:0}: Error finding container ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02: Status 404 returned error can't find the container with id ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02 Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.223600 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.233138 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-65rgb" event={"ID":"e8aacb4a-f044-427a-b5ef-1d4126b98a6a","Type":"ContainerStarted","Data":"35104949249c3b797524bbbce708846543e38271ca4497bb48cec0610fbb4e5d"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.233205 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-65rgb" event={"ID":"e8aacb4a-f044-427a-b5ef-1d4126b98a6a","Type":"ContainerStarted","Data":"f71e939b5f36fce3e31b153d25f807eb7ec599b25bcf56541b647f3d1836e225"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.233864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.235965 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-65rgb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.236006 4793 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podUID="e8aacb4a-f044-427a-b5ef-1d4126b98a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.237677 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea703d52-c081-418f-9343-61b68296314f" containerID="b3d5fccd5ce91cfa10f3aa4efa67a5dac5276d91c9b96650348862da038b3fad" exitCode=0 Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.237724 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerDied","Data":"b3d5fccd5ce91cfa10f3aa4efa67a5dac5276d91c9b96650348862da038b3fad"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.242360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"75caf8f25739686e2addb206cbde5492323c176d0cd4b36001b212b0c13ae756"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.246138 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" event={"ID":"3401bbdc-090b-402b-bf7b-a4a823182946","Type":"ContainerStarted","Data":"e77637d9122e133a6d2b2a42071821a75959ea573de24e01ab364993d4834504"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.246171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" event={"ID":"3401bbdc-090b-402b-bf7b-a4a823182946","Type":"ContainerStarted","Data":"f3c0936a73e62807c0b874758a1c2db154a809b4096905dd8d5cb0c8738657fe"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.251181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerStarted","Data":"9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.251210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerStarted","Data":"029de3b1f28797b6cbbf4b7545deaf6781dd6b3401588287ec9fa2ad62c13962"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.251890 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.253789 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.253817 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: 
connection refused" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.263946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerStarted","Data":"f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.263993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerStarted","Data":"bdda1ecc421f8141df41616046f3d3f188f116ac7ed8f2994e348ec543fa07b3"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.265944 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" event={"ID":"99444dfd-71c4-4d2d-a94a-cecc7a740423","Type":"ContainerStarted","Data":"1bb3533f2f821097a35d4b358c1f72ed9ac789a3e4a473ad96c9b00830444be3"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.265965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" event={"ID":"99444dfd-71c4-4d2d-a94a-cecc7a740423","Type":"ContainerStarted","Data":"58afd350517e81ce61a630548fc3831c772035b08a3aa070c55c46f08a0f8f91"} Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.277007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.277314 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.777270355 +0000 UTC m=+166.478618846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282684 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff54l\" (UniqueName: \"kubernetes.io/projected/9932b998-297e-47a4-a005-ccfca0665793-kube-api-access-ff54l\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282807 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/26050dc1-aaba-45b6-8633-015f5e4261f0-metrics-tls\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282865 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgtm\" (UniqueName: \"kubernetes.io/projected/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-kube-api-access-vjgtm\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wq7p\" (UniqueName: \"kubernetes.io/projected/ff810089-efad-424c-8537-f528803767c7-kube-api-access-2wq7p\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.282956 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cf2\" (UniqueName: \"kubernetes.io/projected/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-kube-api-access-45cf2\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: 
I0130 13:45:55.283030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f30f4833-f565-4225-a45a-02c0f592c37b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbxz\" (UniqueName: \"kubernetes.io/projected/10c05bcf-ffb2-4175-b323-067804ea3391-kube-api-access-7jbxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/afa7929d-37a8-4fa2-9733-158cab1c40ec-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283163 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283226 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghn8d\" (UniqueName: \"kubernetes.io/projected/afa7929d-37a8-4fa2-9733-158cab1c40ec-kube-api-access-ghn8d\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283261 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-webhook-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-plugins-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jsh\" (UniqueName: \"kubernetes.io/projected/ee3323fa-00f7-45ee-8d54-040e40398b5a-kube-api-access-g9jsh\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 
13:45:55.283467 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-tmpfs\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283508 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-images\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283560 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzl2\" (UniqueName: \"kubernetes.io/projected/9fca2cfc-e4a0-42a0-9815-424987b55fd5-kube-api-access-pxzl2\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283716 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-certs\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283751 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj7j\" (UniqueName: \"kubernetes.io/projected/f30f4833-f565-4225-a45a-02c0f592c37b-kube-api-access-8gj7j\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-srv-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-proxy-tls\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-srv-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.283962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-csi-data-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-proxy-tls\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26050dc1-aaba-45b6-8633-015f5e4261f0-config-volume\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284153 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-config\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747rb\" (UniqueName: \"kubernetes.io/projected/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-kube-api-access-747rb\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-socket-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-node-bootstrap-token\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284251 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284282 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284299 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e495d5-6c64-4452-b05c-36e055a100b4-cert\") pod \"ingress-canary-4pnff\" (UID: 
\"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284344 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqhk\" (UniqueName: \"kubernetes.io/projected/26050dc1-aaba-45b6-8633-015f5e4261f0-kube-api-access-khqhk\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwd2\" (UniqueName: \"kubernetes.io/projected/58e495d5-6c64-4452-b05c-36e055a100b4-kube-api-access-nfwd2\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fca2cfc-e4a0-42a0-9815-424987b55fd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6283f5-d30b-483e-8772-456b0109a14b-config\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284428 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6283f5-d30b-483e-8772-456b0109a14b-serving-cert\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284480 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn47d\" (UniqueName: \"kubernetes.io/projected/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-kube-api-access-tn47d\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " 
pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284638 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-mountpoint-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-images\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284690 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-registration-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85cn\" (UniqueName: \"kubernetes.io/projected/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-kube-api-access-s85cn\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284759 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fca2cfc-e4a0-42a0-9815-424987b55fd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284810 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9932b998-297e-47a4-a005-ccfca0665793-signing-cabundle\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284838 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: 
I0130 13:45:55.284868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vznc\" (UniqueName: \"kubernetes.io/projected/8b6283f5-d30b-483e-8772-456b0109a14b-kube-api-access-5vznc\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9932b998-297e-47a4-a005-ccfca0665793-signing-key\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284904 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c05bcf-ffb2-4175-b323-067804ea3391-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284940 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.284983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.285004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.290953 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fca2cfc-e4a0-42a0-9815-424987b55fd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.293005 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fca2cfc-e4a0-42a0-9815-424987b55fd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.294200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-images\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.297279 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.79726132 +0000 UTC m=+166.498609891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.297749 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.299716 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/afa7929d-37a8-4fa2-9733-158cab1c40ec-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.301315 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.301736 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.301786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa7929d-37a8-4fa2-9733-158cab1c40ec-config\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.302322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.323015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.335459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.354077 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.354992 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzl2\" (UniqueName: \"kubernetes.io/projected/9fca2cfc-e4a0-42a0-9815-424987b55fd5-kube-api-access-pxzl2\") pod \"kube-storage-version-migrator-operator-b67b599dd-tkbqv\" (UID: \"9fca2cfc-e4a0-42a0-9815-424987b55fd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.369319 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.374602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386515 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-socket-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-node-bootstrap-token\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 
13:45:55.386853 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386916 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e495d5-6c64-4452-b05c-36e055a100b4-cert\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqhk\" (UniqueName: \"kubernetes.io/projected/26050dc1-aaba-45b6-8633-015f5e4261f0-kube-api-access-khqhk\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.386960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6283f5-d30b-483e-8772-456b0109a14b-config\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387022 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwd2\" (UniqueName: \"kubernetes.io/projected/58e495d5-6c64-4452-b05c-36e055a100b4-kube-api-access-nfwd2\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6283f5-d30b-483e-8772-456b0109a14b-serving-cert\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: 
\"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn47d\" (UniqueName: \"kubernetes.io/projected/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-kube-api-access-tn47d\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387189 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-mountpoint-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-registration-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s85cn\" (UniqueName: \"kubernetes.io/projected/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-kube-api-access-s85cn\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-images\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9932b998-297e-47a4-a005-ccfca0665793-signing-cabundle\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vznc\" (UniqueName: \"kubernetes.io/projected/8b6283f5-d30b-483e-8772-456b0109a14b-kube-api-access-5vznc\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387470 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9932b998-297e-47a4-a005-ccfca0665793-signing-key\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387497 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c05bcf-ffb2-4175-b323-067804ea3391-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387554 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff54l\" (UniqueName: \"kubernetes.io/projected/9932b998-297e-47a4-a005-ccfca0665793-kube-api-access-ff54l\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/26050dc1-aaba-45b6-8633-015f5e4261f0-metrics-tls\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgtm\" (UniqueName: \"kubernetes.io/projected/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-kube-api-access-vjgtm\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wq7p\" (UniqueName: \"kubernetes.io/projected/ff810089-efad-424c-8537-f528803767c7-kube-api-access-2wq7p\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387762 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cf2\" (UniqueName: 
\"kubernetes.io/projected/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-kube-api-access-45cf2\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f30f4833-f565-4225-a45a-02c0f592c37b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.387967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jbxz\" (UniqueName: \"kubernetes.io/projected/10c05bcf-ffb2-4175-b323-067804ea3391-kube-api-access-7jbxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-webhook-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-plugins-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jsh\" (UniqueName: \"kubernetes.io/projected/ee3323fa-00f7-45ee-8d54-040e40398b5a-kube-api-access-g9jsh\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-tmpfs\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") 
pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388521 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gj7j\" (UniqueName: \"kubernetes.io/projected/f30f4833-f565-4225-a45a-02c0f592c37b-kube-api-access-8gj7j\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-srv-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388584 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-certs\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-proxy-tls\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388674 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-srv-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-csi-data-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388831 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26050dc1-aaba-45b6-8633-015f5e4261f0-config-volume\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-proxy-tls\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.388897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-747rb\" (UniqueName: \"kubernetes.io/projected/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-kube-api-access-747rb\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.390353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-plugins-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.390424 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-mountpoint-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.390473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-registration-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.391238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-tmpfs\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.391254 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghn8d\" (UniqueName: \"kubernetes.io/projected/afa7929d-37a8-4fa2-9733-158cab1c40ec-kube-api-access-ghn8d\") pod \"machine-api-operator-5694c8668f-56g7n\" (UID: \"afa7929d-37a8-4fa2-9733-158cab1c40ec\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.391507 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.891470585 +0000 UTC m=+166.592819116 (durationBeforeRetry 500ms). 
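The E0130 record above, whose detail continues below, opens a failure loop that recurs through the rest of this capture: every mount or unmount touching volume pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails because the CSI driver kubevirt.io.hostpath-provisioner has not yet completed node registration with the kubelet. That is consistent with the surrounding records, where the csi-hostpathplugin-gsr67 pod that ships the driver is itself still having its volumes set up. A minimal Go sketch of the failing lookup; the registry type and method names are illustrative stand-ins, not the kubelet's actual internals:

```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry stands in for the kubelet's view of CSI drivers that
// have completed node registration over the plugin-registration socket.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint
}

// clientFor fails the same way the log records do when the named
// driver has not registered yet.
func (r *driverRegistry) clientFor(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	endpoint, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return endpoint, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}} // plugin pod not up yet
	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("kubernetes.io/csi:", err)
	}
}
```

Once the hostpath plugin pod starts and registers, the same lookup succeeds and the pending mount and unmount operations drain.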
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.391679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-socket-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.392365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-images\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.404992 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.405195 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.405763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9932b998-297e-47a4-a005-ccfca0665793-signing-cabundle\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.406187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-webhook-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.407287 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-srv-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.408950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6283f5-d30b-483e-8772-456b0109a14b-config\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: 
I0130 13:45:55.409417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.409407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-csi-data-dir\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.410328 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.410603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-node-bootstrap-token\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.412526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee3323fa-00f7-45ee-8d54-040e40398b5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.412695 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.416557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.418626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f30f4833-f565-4225-a45a-02c0f592c37b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.419000 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/26050dc1-aaba-45b6-8633-015f5e4261f0-metrics-tls\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.422655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9932b998-297e-47a4-a005-ccfca0665793-signing-key\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.425493 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.425709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.426002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c05bcf-ffb2-4175-b323-067804ea3391-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.426760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.426964 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff810089-efad-424c-8537-f528803767c7-certs\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.432993 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-proxy-tls\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.434800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-747rb\" (UniqueName: \"kubernetes.io/projected/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-kube-api-access-747rb\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.435184 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/26050dc1-aaba-45b6-8633-015f5e4261f0-config-volume\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.440629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-proxy-tls\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.442159 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8-srv-cert\") pod \"olm-operator-6b444d44fb-nb75n\" (UID: \"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.442443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.444981 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e495d5-6c64-4452-b05c-36e055a100b4-cert\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.447361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6283f5-d30b-483e-8772-456b0109a14b-serving-cert\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.454579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn47d\" (UniqueName: \"kubernetes.io/projected/6e8eea51-5cd4-4a66-9d0e-fc9fb115807e-kube-api-access-tn47d\") pod \"csi-hostpathplugin-gsr67\" (UID: \"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e\") " pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.476788 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85cn\" (UniqueName: \"kubernetes.io/projected/25ebc563-7e8a-4d8f-ace8-2d6c767816cf-kube-api-access-s85cn\") pod \"packageserver-d55dfcdfc-fbdzm\" (UID: \"25ebc563-7e8a-4d8f-ace8-2d6c767816cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.480377 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.481555 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.486421 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.489258 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v476x"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.489916 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.490210 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:55.990199148 +0000 UTC m=+166.691547639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.490581 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.499506 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.508193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgtm\" (UniqueName: \"kubernetes.io/projected/ce6b8f06-a708-4fdf-bbf3-47648cd005ea-kube-api-access-vjgtm\") pod \"machine-config-controller-84d6567774-4dv9l\" (UID: \"ce6b8f06-a708-4fdf-bbf3-47648cd005ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.511524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jsh\" (UniqueName: \"kubernetes.io/projected/ee3323fa-00f7-45ee-8d54-040e40398b5a-kube-api-access-g9jsh\") pod \"catalog-operator-68c6474976-mgv7t\" (UID: \"ee3323fa-00f7-45ee-8d54-040e40398b5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.518777 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mnzcq"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.537861 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wq7p\" (UniqueName: \"kubernetes.io/projected/ff810089-efad-424c-8537-f528803767c7-kube-api-access-2wq7p\") pod \"machine-config-server-988dg\" (UID: \"ff810089-efad-424c-8537-f528803767c7\") " pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.564567 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-45cf2\" (UniqueName: \"kubernetes.io/projected/1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48-kube-api-access-45cf2\") pod \"machine-config-operator-74547568cd-lt7rr\" (UID: \"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.570654 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"marketplace-operator-79b997595-zd5lq\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.591086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jbxz\" (UniqueName: \"kubernetes.io/projected/10c05bcf-ffb2-4175-b323-067804ea3391-kube-api-access-7jbxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-vqxml\" (UID: \"10c05bcf-ffb2-4175-b323-067804ea3391\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.591515 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.591871 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.091854358 +0000 UTC m=+166.793202849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.600324 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" Jan 30 13:45:55 crc kubenswrapper[4793]: W0130 13:45:55.607108 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f6bee7_a66e_4cec_83d5_6c0796a73e22.slice/crio-24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c WatchSource:0}: Error finding container 24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c: Status 404 returned error can't find the container with id 24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.607317 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.615761 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-988dg" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.628237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqhk\" (UniqueName: \"kubernetes.io/projected/26050dc1-aaba-45b6-8633-015f5e4261f0-kube-api-access-khqhk\") pod \"dns-default-2lf59\" (UID: \"26050dc1-aaba-45b6-8633-015f5e4261f0\") " pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.632358 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2lf59" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.660207 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gj7j\" (UniqueName: \"kubernetes.io/projected/f30f4833-f565-4225-a45a-02c0f592c37b-kube-api-access-8gj7j\") pod \"package-server-manager-789f6589d5-r8b5w\" (UID: \"f30f4833-f565-4225-a45a-02c0f592c37b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.697806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff54l\" (UniqueName: \"kubernetes.io/projected/9932b998-297e-47a4-a005-ccfca0665793-kube-api-access-ff54l\") pod \"service-ca-9c57cc56f-n9v6k\" (UID: \"9932b998-297e-47a4-a005-ccfca0665793\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.698457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.698993 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.198977022 +0000 UTC m=+166.900325523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.703970 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.706892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwd2\" (UniqueName: \"kubernetes.io/projected/58e495d5-6c64-4452-b05c-36e055a100b4-kube-api-access-nfwd2\") pod \"ingress-canary-4pnff\" (UID: \"58e495d5-6c64-4452-b05c-36e055a100b4\") " pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.720486 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.744723 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.746255 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.747464 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.750519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vznc\" (UniqueName: \"kubernetes.io/projected/8b6283f5-d30b-483e-8772-456b0109a14b-kube-api-access-5vznc\") pod \"service-ca-operator-777779d784-wzj2m\" (UID: \"8b6283f5-d30b-483e-8772-456b0109a14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.752358 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"collect-profiles-29496345-xbqs7\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.757720 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.760369 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.766219 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.800150 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.800769 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.300752775 +0000 UTC m=+167.002101256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.803413 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.828365 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" Jan 30 13:45:55 crc kubenswrapper[4793]: W0130 13:45:55.870813 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a53aac_c9f7_465c_821b_cd62aa893d13.slice/crio-ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e WatchSource:0}: Error finding container ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e: Status 404 returned error can't find the container with id ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.901547 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:55 crc kubenswrapper[4793]: E0130 13:45:55.902244 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.402229681 +0000 UTC m=+167.103578172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:55 crc kubenswrapper[4793]: I0130 13:45:55.926859 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4pnff" Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:55.998465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.002704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.002975 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.502960807 +0000 UTC m=+167.204309298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.050403 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.070645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.098698 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-899ps"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.104287 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.104637 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.604623647 +0000 UTC m=+167.305972138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.115207 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-56g7n"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.221423 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.221758 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.721741934 +0000 UTC m=+167.423090425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.223596 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gsr67"] Jan 30 13:45:56 crc kubenswrapper[4793]: W0130 13:45:56.271813 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce07df7_af19_4334_b704_818df47958a1.slice/crio-40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977 WatchSource:0}: Error finding container 40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977: Status 404 returned error can't find the container with id 40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977 Jan 30 13:45:56 crc kubenswrapper[4793]: W0130 13:45:56.290543 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa7929d_37a8_4fa2_9733_158cab1c40ec.slice/crio-0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f WatchSource:0}: Error finding container 0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f: Status 404 returned error can't find the container with id 0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.292208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" event={"ID":"daa9599a-67b0-421e-8add-0656c0b98af2","Type":"ContainerStarted","Data":"49a180b06b2102f1f0bdd289dc2e1b6c881d599af48ea9adf0dbf94bab3b6d0e"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.293107 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" event={"ID":"3806824c-28d3-47d4-b33f-01d9ab1239b8","Type":"ContainerStarted","Data":"ebeef65cf977c550f990b47bea40a369de75d49849bacece5940da4022148b02"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.304256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" event={"ID":"e2a53aac-c9f7-465c-821b-cd62aa893d13","Type":"ContainerStarted","Data":"ceaca64468097ed06b34d5285968da73a0c12ecd8f2de0d6a9b136046beec28e"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.319379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lv2p" event={"ID":"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc","Type":"ContainerStarted","Data":"9a3a1e27832473618b66e6b2c1055e6a48e70a3eca61ad8b9f60c802f1d3f22a"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.323956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.325361 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"] Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.325962 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.825944931 +0000 UTC m=+167.527293422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.328169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" event={"ID":"25ebc563-7e8a-4d8f-ace8-2d6c767816cf","Type":"ContainerStarted","Data":"ce7275dc9b0505faf357fda3a2560f041a27b41cc92b3214055ec96cf24dcc9c"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.329865 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" event={"ID":"c44b9aaf-de3a-48a8-8760-5553255887ac","Type":"ContainerStarted","Data":"9cede41913997b56f9e43a0dc2bab8c620ba35a3fb3110774d665a4cb117d065"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.332836 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" event={"ID":"cd7922e2-3b17-4212-94b3-2405e20841ad","Type":"ContainerStarted","Data":"09db831e86d1c450c70165a2b7437425ff325654e30625f6159cd607dbf8b13a"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.352344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" event={"ID":"7fc1ca51-0362-4492-ba07-8c5413c39deb","Type":"ContainerStarted","Data":"6d0752410ba98c2bc2f1a92bea73229e89fabbad72bdd349cf6974dd56b8c7a1"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.352388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" event={"ID":"7fc1ca51-0362-4492-ba07-8c5413c39deb","Type":"ContainerStarted","Data":"6faa549e755518bfd5dec01dc6e80a76a8ba8e2e393bcad75ee67b04203d8b8a"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.389267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" event={"ID":"4e62edf8-f827-4fa6-8b40-563c821707ae","Type":"ContainerStarted","Data":"4e41b1a1f4f457fc0474caf8e5ca919e41d2a622c6a06709a5f2df3908f9d18e"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.438975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.439424 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:56.939399371 +0000 UTC m=+167.640747872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.461820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" event={"ID":"b72b54ef-6699-4091-b47d-f05f7c85adb2","Type":"ContainerStarted","Data":"52a3743f45ced1808d08a2400b6b73d60ac30fc4f23792f8bdb542aa51781cf3"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.479430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerStarted","Data":"333d1fe50b85de201d8359b376659ea922dde6cd7dc921f7d1df2397e061732e"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.491986 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv"] Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.504431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerStarted","Data":"0e39fca869bb577560ccf5c5e0fd7294441d98f691e7a0b7c896fff632efcbeb"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.539131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" event={"ID":"46caba5b-4a87-480a-ac56-437102a31802","Type":"ContainerStarted","Data":"aacb136b6e0299cc36715a06c8bd3491ac3bb3d3c5b7e39583453f7fc41f4291"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.539180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" event={"ID":"46caba5b-4a87-480a-ac56-437102a31802","Type":"ContainerStarted","Data":"9645e79daceb9f44d806c214d1518c565f7dd080ef9ce89c8b3afaea21bee0f2"} Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.540332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.540692 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.040679512 +0000 UTC m=+167.742028003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.547505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" event={"ID":"d2aa0043-dc77-41ca-a95f-2d119ed48053","Type":"ContainerStarted","Data":"7394f9ee0cd656a9ab0c003174a7397ecfbee9a1cc9b73ba9a34857dbcd6b515"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.554407 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" event={"ID":"1faa169d-53de-456e-8f99-f93dc2772719","Type":"ContainerStarted","Data":"29869ec4f3416801c9952e4cc002e2be8b2ae1a57d0c81beaf18a751ddccf77f"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.591497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerStarted","Data":"86ef773c0816c089c75665928f1abef5c6f766f515abfa5bb1d78513d4527722"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.641217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.643431 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.14341484 +0000 UTC m=+167.844763331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.686616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" event={"ID":"d3f6bee7-a66e-4cec-83d5-6c0796a73e22","Type":"ContainerStarted","Data":"24bd55b7779751400a79ee717b96ea544f012a65f6b30cdf0b0ec04c1bc00a8c"}
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.687958 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.688027 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.689814 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sd6hs"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.689917 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-65rgb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.689952 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podUID="e8aacb4a-f044-427a-b5ef-1d4126b98a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.693248 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.696397 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.696450 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.742810 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.743164 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.24315411 +0000 UTC m=+167.944502601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.749127 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.809718 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.810985 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dw8jz" podStartSLOduration=144.810975962 podStartE2EDuration="2m24.810975962s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:56.809979285 +0000 UTC m=+167.511327786" watchObservedRunningTime="2026-01-30 13:45:56.810975962 +0000 UTC m=+167.512324453"
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.844218 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.846680 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.346658309 +0000 UTC m=+168.048006810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.895912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml"]
Jan 30 13:45:56 crc kubenswrapper[4793]: I0130 13:45:56.947359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:56 crc kubenswrapper[4793]: E0130 13:45:56.947736 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.447724813 +0000 UTC m=+168.149073304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.063564 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.063965 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.563950496 +0000 UTC m=+168.265298987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.164961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.165345 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.665328439 +0000 UTC m=+168.366676930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.182919 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.262916 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podStartSLOduration=144.262896092 podStartE2EDuration="2m24.262896092s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.247459896 +0000 UTC m=+167.948808387" watchObservedRunningTime="2026-01-30 13:45:57.262896092 +0000 UTC m=+167.964244583"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.265942 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.266011 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.765991504 +0000 UTC m=+168.467340005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.270826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.272523 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.772508944 +0000 UTC m=+168.473857435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.328925 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podStartSLOduration=145.328907606 podStartE2EDuration="2m25.328907606s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.296755142 +0000 UTC m=+167.998103633" watchObservedRunningTime="2026-01-30 13:45:57.328907606 +0000 UTC m=+168.030256097"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.329041 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sd6hs" podStartSLOduration=145.32903664 podStartE2EDuration="2m25.32903664s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.324178652 +0000 UTC m=+168.025527143" watchObservedRunningTime="2026-01-30 13:45:57.32903664 +0000 UTC m=+168.030385131"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.345180 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.372131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.372579 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.872559123 +0000 UTC m=+168.573907614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.393684 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2lf59"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.426604 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9v6k"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.473800 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.474139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:57.97412577 +0000 UTC m=+168.675474261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.482439 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.488941 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.570020 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4pnff"]
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.574482 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.574938 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.074916758 +0000 UTC m=+168.776265269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.676023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.676723 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.176695771 +0000 UTC m=+168.878044262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.695522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" event={"ID":"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8","Type":"ContainerStarted","Data":"576470021ef659f30c7c3a2539e82fa8bd5c5b14ba15049a0ef55de4b5c75eb5"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.697430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" event={"ID":"f30f4833-f565-4225-a45a-02c0f592c37b","Type":"ContainerStarted","Data":"17e76deb92370243c06f7980b3c6816976961fdf27c9e6f1f2e65688869856a1"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.707656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"09fa684b0d4ebc9391c068ae0df11f135365b0d6393d4dde12538b47a1507b7c"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.712433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4pnff" event={"ID":"58e495d5-6c64-4452-b05c-36e055a100b4","Type":"ContainerStarted","Data":"3b2c6d50949137403ba9a8c10686eb0bfe0bd31f7cd7e10f0ba100ae385a864c"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.734266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" event={"ID":"51800ff9-fe19-4a50-a272-be1de629ec82","Type":"ContainerStarted","Data":"9aa26c88fb9b9122494199b4740e048147441c39bd3ab54fbd6e660e38b23848"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.747016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" event={"ID":"4ce07df7-af19-4334-b704-818df47958a1","Type":"ContainerStarted","Data":"40d821df48bde25c13419212f33e6d45e1f09a2976143a476e372ddcb7de8977"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.761778 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" event={"ID":"afa7929d-37a8-4fa2-9733-158cab1c40ec","Type":"ContainerStarted","Data":"0675174f602274cec64270e535350cddda8ab1136c88dae78a81e3e89a4f7d9f"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.766516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"97c187117ac894b4f40744eaace0837c1dade5f185e1a06955e03936c650d6b8"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.775949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-988dg" event={"ID":"ff810089-efad-424c-8537-f528803767c7","Type":"ContainerStarted","Data":"6aa789dccf5f4881b36dec0232e1b855b2419d40560b793aa9ac888036acc963"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.777167 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.777491 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.277476619 +0000 UTC m=+168.978825110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.780295 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" event={"ID":"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48","Type":"ContainerStarted","Data":"cff77a10780a1d452309df890646561da8cd096bed583158717ae7bce4c6c9da"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.782842 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" event={"ID":"cd7922e2-3b17-4212-94b3-2405e20841ad","Type":"ContainerStarted","Data":"b429809c4589815fb5f49b2c0edebebb65aa2ef40f8908286328904b0e16c6a2"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.804025 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lf59" event={"ID":"26050dc1-aaba-45b6-8633-015f5e4261f0","Type":"ContainerStarted","Data":"6f005ffde2411968faec1332790f25d7456f670992f069a01455efacb21f1c00"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.809943 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cm7mm" podStartSLOduration=145.809919851 podStartE2EDuration="2m25.809919851s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.807039425 +0000 UTC m=+168.508387916" watchObservedRunningTime="2026-01-30 13:45:57.809919851 +0000 UTC m=+168.511268342"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.817425 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" event={"ID":"99444dfd-71c4-4d2d-a94a-cecc7a740423","Type":"ContainerStarted","Data":"537561bb010e6c29f93a468442e04f859e3635bd4e19d86b7fb14a93a6631955"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.860241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"9f2e734e355637ec91981730e25d72b3875fcf74dfe3193d7d89e38ad49704e9"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.865450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerStarted","Data":"b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.874695 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ztcbh" podStartSLOduration=145.868998102 podStartE2EDuration="2m25.868998102s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.868545201 +0000 UTC m=+168.569893692" watchObservedRunningTime="2026-01-30 13:45:57.868998102 +0000 UTC m=+168.570346593"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.878802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xfcvw" event={"ID":"3401bbdc-090b-402b-bf7b-a4a823182946","Type":"ContainerStarted","Data":"310a18f020d53e38a65bd5e52c8e9b754a180dbefbf488b0becb0c8fde24d7f7"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.880390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.881360 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.381317976 +0000 UTC m=+169.082666467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.890423 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" event={"ID":"ee3323fa-00f7-45ee-8d54-040e40398b5a","Type":"ContainerStarted","Data":"35d4b95595df385b4efd1e4ea98b44dea785181c369c968e66b34c3aa27fe080"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.898011 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" event={"ID":"8b6283f5-d30b-483e-8772-456b0109a14b","Type":"ContainerStarted","Data":"c7dfa153d75591386ab9ed60f87fdaeb19e42906dab9c09ea14bdec8f6d8578d"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.910451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" event={"ID":"4e62edf8-f827-4fa6-8b40-563c821707ae","Type":"ContainerStarted","Data":"39dc0ea700dee749040077e6ae12d95b42f7940a721f04248f2b017c10a9072c"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.923531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" event={"ID":"9932b998-297e-47a4-a005-ccfca0665793","Type":"ContainerStarted","Data":"a91d2292a307c8b48c622d4b089b8d52f7036cf5e527afa26facd534bcae767d"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.924520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" event={"ID":"e2a53aac-c9f7-465c-821b-cd62aa893d13","Type":"ContainerStarted","Data":"c5e1c5268fdbd7f5b0565de84492659091599640e7128c63aebc0d9f546c8f2d"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.925890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" event={"ID":"b72b54ef-6699-4091-b47d-f05f7c85adb2","Type":"ContainerStarted","Data":"045f03756eb7708f2de161fc2f810472beb65668a9ac2e931f843c27c0643ba0"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.926504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" event={"ID":"9fca2cfc-e4a0-42a0-9815-424987b55fd5","Type":"ContainerStarted","Data":"a8ff1c2296915927c32523d7c5c12e80ac30c681bd831b6e8585353b74330057"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.928022 4793 generic.go:334] "Generic (PLEG): container finished" podID="d2aa0043-dc77-41ca-a95f-2d119ed48053" containerID="175d3fd8eab5742391ef64df6fe143201ebc2c0816979ab91adc7a4c8925613f" exitCode=0
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.928113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" event={"ID":"d2aa0043-dc77-41ca-a95f-2d119ed48053","Type":"ContainerDied","Data":"175d3fd8eab5742391ef64df6fe143201ebc2c0816979ab91adc7a4c8925613f"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.939211 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kknzc" podStartSLOduration=145.939187086 podStartE2EDuration="2m25.939187086s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.938623472 +0000 UTC m=+168.639971973" watchObservedRunningTime="2026-01-30 13:45:57.939187086 +0000 UTC m=+168.640535577"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.965552 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerStarted","Data":"d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46"}
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.966142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.968321 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.968361 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.970776 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xfcvw" podStartSLOduration=145.970760426 podStartE2EDuration="2m25.970760426s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:57.968745363 +0000 UTC m=+168.670093844" watchObservedRunningTime="2026-01-30 13:45:57.970760426 +0000 UTC m=+168.672108917"
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.981404 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:57 crc kubenswrapper[4793]: E0130 13:45:57.983300 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.483282755 +0000 UTC m=+169.184631246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:57 crc kubenswrapper[4793]: I0130 13:45:57.998587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lv2p" event={"ID":"0e50ecc2-1bbc-4e8c-8d46-edf8369095bc","Type":"ContainerStarted","Data":"a1bf3ad39f1b83e609823551975eb328f953eab1151ca8aadf29efd0d688a8d7"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.008774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerStarted","Data":"2756eee741a154fa1aa7b08871d9983b24c6902d02d5329f07b41386b8b427b1"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.009790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerStarted","Data":"02184320f6531b0c82ba4d167218eef7190463e44618fd9bd7006fada9858678"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.010995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerStarted","Data":"d4d4fa8a5717a04d957f305331300580ade8f686e881920c35d0ae4b21426604"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.020186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" event={"ID":"daa9599a-67b0-421e-8add-0656c0b98af2","Type":"ContainerStarted","Data":"f238185a8ee70f5ee654989191ca0e853395468d89f894a128f4d0f06cb3e963"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.046003 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9tb5z" podStartSLOduration=146.045986312 podStartE2EDuration="2m26.045986312s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.044593395 +0000 UTC m=+168.745941896" watchObservedRunningTime="2026-01-30 13:45:58.045986312 +0000 UTC m=+168.747334803"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.050058 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" event={"ID":"3806824c-28d3-47d4-b33f-01d9ab1239b8","Type":"ContainerStarted","Data":"5cde59f3f8f5aff2e52f56accf54a4a6faf23873b2d00104e67896a934cc7c4f"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.070131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" event={"ID":"ce6b8f06-a708-4fdf-bbf3-47648cd005ea","Type":"ContainerStarted","Data":"7c75ea3fcddb215096ec65a3a642aacfc024658b46ed2c7bf0fb49de2795c068"}
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071193 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-65rgb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071236 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-65rgb" podUID="e8aacb4a-f044-427a-b5ef-1d4126b98a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071553 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.071613 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.080448 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.090474 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.093114 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5l76j" podStartSLOduration=146.093099259 podStartE2EDuration="2m26.093099259s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.069438717 +0000 UTC m=+168.770787218" watchObservedRunningTime="2026-01-30 13:45:58.093099259 +0000 UTC m=+168.794447750"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.093908 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.096960 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.59694758 +0000 UTC m=+169.298296061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.097196 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.097319 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.121157 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podStartSLOduration=146.121140745 podStartE2EDuration="2m26.121140745s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.093034847 +0000 UTC m=+168.794383338" watchObservedRunningTime="2026-01-30 13:45:58.121140745 +0000 UTC m=+168.822489236"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.122642 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zrj8g" podStartSLOduration=146.122636965 podStartE2EDuration="2m26.122636965s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.119814181 +0000 UTC m=+168.821162692" watchObservedRunningTime="2026-01-30 13:45:58.122636965 +0000 UTC m=+168.823985456"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.196296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.198147 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.698128458 +0000 UTC m=+169.399476949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.236539 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2lv2p" podStartSLOduration=146.236524947 podStartE2EDuration="2m26.236524947s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:58.184014697 +0000 UTC m=+168.885363198" watchObservedRunningTime="2026-01-30 13:45:58.236524947 +0000 UTC m=+168.937873438"
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.299029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.299353 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.799338127 +0000 UTC m=+169.500686618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.404133 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.404499 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:58.904483689 +0000 UTC m=+169.605832180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.521708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.522447 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.022432917 +0000 UTC m=+169.723781418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.623285 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.623610 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.123596974 +0000 UTC m=+169.824945455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.725789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.726218 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.226196458 +0000 UTC m=+169.927544989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.827501 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.828240 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.328223459 +0000 UTC m=+170.029571950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:58 crc kubenswrapper[4793]: I0130 13:45:58.930702 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:58 crc kubenswrapper[4793]: E0130 13:45:58.931685 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.431665836 +0000 UTC m=+170.133014327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.032299 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.032640 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.532625488 +0000 UTC m=+170.233973979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.088538 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.088593 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.118482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" event={"ID":"ce6b8f06-a708-4fdf-bbf3-47648cd005ea","Type":"ContainerStarted","Data":"07fe6e904f27b28fb11aac43945b2e946f813198fee541317eebcee351f6722f"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.134237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.134848 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.634830432 +0000 UTC m=+170.336178923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.136251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" event={"ID":"1faa169d-53de-456e-8f99-f93dc2772719","Type":"ContainerStarted","Data":"6eb4c7e76e77ed698549785ad31f9e89a2e40102afa15dc6648251bafbbd21f1"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.157497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" event={"ID":"1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8","Type":"ContainerStarted","Data":"8201c1db636a976dafb517701da07b041385f89f0e9b3dfc309184a4b9d1d815"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.158546 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.159808 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m754g" podStartSLOduration=146.159788028 podStartE2EDuration="2m26.159788028s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.156006278 +0000 UTC m=+169.857354799" watchObservedRunningTime="2026-01-30 13:45:59.159788028 +0000 UTC m=+169.861136539"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.161690 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nb75n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.163345 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" podUID="1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.168912 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" event={"ID":"8b6283f5-d30b-483e-8772-456b0109a14b","Type":"ContainerStarted","Data":"22de577a997dc4844e6f170b2bc451ebde5e16c1bda76d2b35fe98cc02a61e0f"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.194200 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" podStartSLOduration=146.194183822 podStartE2EDuration="2m26.194183822s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.191397489 +0000 UTC m=+169.892745980" watchObservedRunningTime="2026-01-30 13:45:59.194183822 +0000 UTC m=+169.895532313"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.196103 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" event={"ID":"c44b9aaf-de3a-48a8-8760-5553255887ac","Type":"ContainerStarted","Data":"edffb6218239025134a566d8338344713613e6e23f8b81031e4c34df8a9e9144"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.202260 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" event={"ID":"9932b998-297e-47a4-a005-ccfca0665793","Type":"ContainerStarted","Data":"6d4860beb4109c5a6235b4f3634ee65d2b53e79d766314f0fe423f9bdaa43dbc"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.204257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" event={"ID":"7fc1ca51-0362-4492-ba07-8c5413c39deb","Type":"ContainerStarted","Data":"8d47d4bd3977502b09188b17a29126fa14b79b89d25b3bf5c619b27bbbdc4a04"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.205854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4pnff" event={"ID":"58e495d5-6c64-4452-b05c-36e055a100b4","Type":"ContainerStarted","Data":"2f2faca3b6d19a4a83729a3b35d6dba8587348ad306bb8ddeadb8ea41b2d1c74"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.211418 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" event={"ID":"9fca2cfc-e4a0-42a0-9815-424987b55fd5","Type":"ContainerStarted","Data":"3a4833ce89933f7d1a33fedee1d652f75f0d97dbe2f3a37cf91fb091f62b0575"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.215987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" event={"ID":"51800ff9-fe19-4a50-a272-be1de629ec82","Type":"ContainerStarted","Data":"2af8a796d982c2ee4c0edfc0b738330c2abdc9983916db7150a8f44d58fec00b"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.217968 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerStarted","Data":"2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794"}
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.218530 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.222781 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2mcj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.223159 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.225282 4793 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" event={"ID":"afa7929d-37a8-4fa2-9733-158cab1c40ec","Type":"ContainerStarted","Data":"7879e71671d6f7252902a061b12a530b8ba33625603b0d4d8130f0fc3d40f270"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.227955 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-988dg" event={"ID":"ff810089-efad-424c-8537-f528803767c7","Type":"ContainerStarted","Data":"243dc25471aa047bf84126b95ea2f0a80cc4fca3dfcd4b7891394dd7596496b5"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.229552 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n9v6k" podStartSLOduration=146.22954087 podStartE2EDuration="2m26.22954087s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.227477237 +0000 UTC m=+169.928825728" watchObservedRunningTime="2026-01-30 13:45:59.22954087 +0000 UTC m=+169.930889361" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.231710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" event={"ID":"25ebc563-7e8a-4d8f-ace8-2d6c767816cf","Type":"ContainerStarted","Data":"463d7be559b645fc1cbfa75616a507af2f5fbef950f5efe4351b0b0273f5de2e"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.232506 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.233984 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fbdzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.234109 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podUID="25ebc563-7e8a-4d8f-ace8-2d6c767816cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.235763 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.235989 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.73596671 +0000 UTC m=+170.437315261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.242913 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" event={"ID":"d3f6bee7-a66e-4cec-83d5-6c0796a73e22","Type":"ContainerStarted","Data":"49031d31378cbe0f02a6049ef5b9da994544ea93af31a8215dd9e6d3728bf4b9"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.260401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.262163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.265637 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.265710 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.279486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" event={"ID":"f30f4833-f565-4225-a45a-02c0f592c37b","Type":"ContainerStarted","Data":"e8b9bdf9e6b38b1be771498296bf4f5756c61337e164748d997dc6c85949085d"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.286433 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podStartSLOduration=147.286415535 podStartE2EDuration="2m27.286415535s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.262031054 +0000 UTC m=+169.963379555" watchObservedRunningTime="2026-01-30 13:45:59.286415535 +0000 UTC m=+169.987764026" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.287658 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7x8ff" podStartSLOduration=148.287651346 podStartE2EDuration="2m28.287651346s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.287442261 +0000 UTC 
m=+169.988790762" watchObservedRunningTime="2026-01-30 13:45:59.287651346 +0000 UTC m=+169.988999837" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.305367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerStarted","Data":"212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.316221 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.320033 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ce07df7-af19-4334-b704-818df47958a1" containerID="faf76e32a21b3409d88a29e026fbc6a735f3e18018e820a84116c9565adccbb0" exitCode=0 Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.320113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" event={"ID":"4ce07df7-af19-4334-b704-818df47958a1","Type":"ContainerDied","Data":"faf76e32a21b3409d88a29e026fbc6a735f3e18018e820a84116c9565adccbb0"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.332857 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jb6f2" podStartSLOduration=146.332838874 podStartE2EDuration="2m26.332838874s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.331216301 +0000 UTC m=+170.032564792" watchObservedRunningTime="2026-01-30 13:45:59.332838874 +0000 UTC m=+170.034187365" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.337922 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.340336 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.84032403 +0000 UTC m=+170.541672521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.347820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerStarted","Data":"0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.371692 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tkbqv" podStartSLOduration=146.371668923 podStartE2EDuration="2m26.371668923s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.364873965 +0000 UTC m=+170.066222456" watchObservedRunningTime="2026-01-30 13:45:59.371668923 +0000 UTC m=+170.073017414" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.379932 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" event={"ID":"daa9599a-67b0-421e-8add-0656c0b98af2","Type":"ContainerStarted","Data":"bc009a25495fdc317d5944d28e57adf1be4a457a67969ccdec1e58e68e1cee5e"} Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.381226 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.388702 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.436859 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" podStartSLOduration=59.436840355 podStartE2EDuration="59.436840355s" podCreationTimestamp="2026-01-30 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.4362375 +0000 UTC m=+170.137586001" watchObservedRunningTime="2026-01-30 13:45:59.436840355 +0000 UTC m=+170.138188846" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.437185 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-988dg" podStartSLOduration=7.437180044 podStartE2EDuration="7.437180044s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 13:45:59.413579044 +0000 UTC m=+170.114927555" watchObservedRunningTime="2026-01-30 13:45:59.437180044 +0000 UTC m=+170.138528535" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.440561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.441551 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:45:59.941533178 +0000 UTC m=+170.642881679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.525082 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wj2bx" podStartSLOduration=146.525036062 podStartE2EDuration="2m26.525036062s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.515884581 +0000 UTC m=+170.217233082" watchObservedRunningTime="2026-01-30 13:45:59.525036062 +0000 UTC m=+170.226384553" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.542832 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.549326 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.049311069 +0000 UTC m=+170.750659660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.621118 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" podStartSLOduration=148.621098516 podStartE2EDuration="2m28.621098516s" podCreationTimestamp="2026-01-30 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.538346412 +0000 UTC m=+170.239694903" watchObservedRunningTime="2026-01-30 13:45:59.621098516 +0000 UTC m=+170.322447007" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.644834 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.645350 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.145330662 +0000 UTC m=+170.846679163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.649038 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podStartSLOduration=146.649009969 podStartE2EDuration="2m26.649009969s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.622425671 +0000 UTC m=+170.323774152" watchObservedRunningTime="2026-01-30 13:45:59.649009969 +0000 UTC m=+170.350358460" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.674417 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podStartSLOduration=146.674397046 podStartE2EDuration="2m26.674397046s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.647838708 +0000 UTC m=+170.349187219" watchObservedRunningTime="2026-01-30 13:45:59.674397046 +0000 UTC m=+170.375745537" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.697302 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" podStartSLOduration=146.697284017 podStartE2EDuration="2m26.697284017s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.690658882 +0000 UTC m=+170.392007373" watchObservedRunningTime="2026-01-30 13:45:59.697284017 +0000 UTC m=+170.398632508" Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.746471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.746857 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.246842228 +0000 UTC m=+170.948190719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.847164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.847407 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.347377059 +0000 UTC m=+171.048725550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.847575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.847941 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.347927314 +0000 UTC m=+171.049275805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.980856 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.480835155 +0000 UTC m=+171.182183646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.980738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:45:59 crc kubenswrapper[4793]: I0130 13:45:59.981547 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:45:59 crc kubenswrapper[4793]: E0130 13:45:59.981826 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.48181858 +0000 UTC m=+171.183167071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.083792 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.083902 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.583878951 +0000 UTC m=+171.285227442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.084141 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.084462 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.584454457 +0000 UTC m=+171.285802948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.086983 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:00 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:00 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:00 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.087016 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.184615 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.184783 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.684755781 +0000 UTC m=+171.386104272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.184842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.185203 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.685196893 +0000 UTC m=+171.386545384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.285292 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.285518 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.785504227 +0000 UTC m=+171.486852718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.387669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.387949 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.887937218 +0000 UTC m=+171.589285709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.396196 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lf59" event={"ID":"26050dc1-aaba-45b6-8633-015f5e4261f0","Type":"ContainerStarted","Data":"bf70278bac45a386fe1332d03c028ceb08240eb41110f1dd19708c3139c46a90"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.396239 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2lf59" event={"ID":"26050dc1-aaba-45b6-8633-015f5e4261f0","Type":"ContainerStarted","Data":"67fdf5dc2fb3bd571c6367c39c42f40ffdbc089986cdee111a376b51c566d5a4"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.406113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2lf59" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.406142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.406153 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" event={"ID":"ee3323fa-00f7-45ee-8d54-040e40398b5a","Type":"ContainerStarted","Data":"d67051afae4644e435b7ff2207c4adb177e81535b02b0afe3e3f984e50a68a26"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.407332 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mgv7t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.407369 4793 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" podUID="ee3323fa-00f7-45ee-8d54-040e40398b5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.408193 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" event={"ID":"afa7929d-37a8-4fa2-9733-158cab1c40ec","Type":"ContainerStarted","Data":"141da19e21a5c753ba8dbfa39952543b5be8152c8b19f7b5d722d35200e4fb3d"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.412558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" event={"ID":"f30f4833-f565-4225-a45a-02c0f592c37b","Type":"ContainerStarted","Data":"fb9c35226649b0559845588b7db26db8e2dfcd97a94fe1995440c905f68fd6cd"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.413010 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.415629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" event={"ID":"c44b9aaf-de3a-48a8-8760-5553255887ac","Type":"ContainerStarted","Data":"cd86524cb8e49b0001d3388b960362a26af7a64df77f22768d781a2af3bc3421"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.417907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" event={"ID":"ea703d52-c081-418f-9343-61b68296314f","Type":"ContainerStarted","Data":"ad8067578dce7cb75b98ef59a545ba8ac0512e86c3d0bc878456ecd3ae97e490"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.421863 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v476x" podStartSLOduration=148.421844519 podStartE2EDuration="2m28.421844519s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:45:59.729600175 +0000 UTC m=+170.430948676" watchObservedRunningTime="2026-01-30 13:46:00.421844519 +0000 UTC m=+171.123193010" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.425278 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" event={"ID":"ce6b8f06-a708-4fdf-bbf3-47648cd005ea","Type":"ContainerStarted","Data":"ab744e3dc89600cd7e56f10e41fd4271475ca3626c547af8be3cb2cc2ca56ad0"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.428558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" event={"ID":"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48","Type":"ContainerStarted","Data":"66b4a425e930d35884c64c7b600375d9acb2045b1da8048e32f9c83e9f6faf4d"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.428604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" event={"ID":"1ee613f2-a86f-4f43-bdf9-1d25eaa2bc48","Type":"ContainerStarted","Data":"7ddc831980fb643c0d8d74a3339b16e7db8f8a48dd90c6289eb6e488d030286c"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 
13:46:00.433667 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" event={"ID":"b72b54ef-6699-4091-b47d-f05f7c85adb2","Type":"ContainerStarted","Data":"77108c01b247508afc341e1a035c80ee33fc3e6964bbff7ee5d8fd975c7d4292"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.435372 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" event={"ID":"d2aa0043-dc77-41ca-a95f-2d119ed48053","Type":"ContainerStarted","Data":"0e81f3d1b0cf33096ac537979ab91ae70d5104a4438f6f9123572c6a18252613"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.437960 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" event={"ID":"4ce07df7-af19-4334-b704-818df47958a1","Type":"ContainerStarted","Data":"cf1f848d7f84df7f56178e6ac1fa86f072a7607f4c9c8ddf92fecb353f675afb"} Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.437988 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442139 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fbdzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442176 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podUID="25ebc563-7e8a-4d8f-ace8-2d6c767816cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442188 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2mcj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442235 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442293 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.442342 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.443860 4793 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-56g7n" podStartSLOduration=147.443845447 podStartE2EDuration="2m27.443845447s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.442562193 +0000 UTC m=+171.143910684" watchObservedRunningTime="2026-01-30 13:46:00.443845447 +0000 UTC m=+171.145193938" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444151 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444179 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444181 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nb75n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444220 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" podUID="1f26bb2d-f2dd-4476-8cf1-aa710aa6cba8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.444993 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2lf59" podStartSLOduration=8.444983017 podStartE2EDuration="8.444983017s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.420336499 +0000 UTC m=+171.121684990" watchObservedRunningTime="2026-01-30 13:46:00.444983017 +0000 UTC m=+171.146331508" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.466899 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" podStartSLOduration=147.466878512 podStartE2EDuration="2m27.466878512s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.466174053 +0000 UTC m=+171.167522554" watchObservedRunningTime="2026-01-30 13:46:00.466878512 +0000 UTC m=+171.168227003" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.490352 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" podStartSLOduration=148.490332877 podStartE2EDuration="2m28.490332877s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 13:46:00.487482403 +0000 UTC m=+171.188830894" watchObservedRunningTime="2026-01-30 13:46:00.490332877 +0000 UTC m=+171.191681368" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.491148 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.491215 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:00.991203991 +0000 UTC m=+171.692552482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.498568 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.502076 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.002062716 +0000 UTC m=+171.703411207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.506097 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" podStartSLOduration=147.506028391 podStartE2EDuration="2m27.506028391s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.501666476 +0000 UTC m=+171.203014997" watchObservedRunningTime="2026-01-30 13:46:00.506028391 +0000 UTC m=+171.207376882" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.570228 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q5442" podStartSLOduration=147.570207636 podStartE2EDuration="2m27.570207636s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.568748837 +0000 UTC m=+171.270097328" watchObservedRunningTime="2026-01-30 13:46:00.570207636 +0000 UTC m=+171.271556127" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.600726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.600954 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.100926632 +0000 UTC m=+171.802275133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.601271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.602303 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.102294228 +0000 UTC m=+171.803642719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.637672 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mnzcq" podStartSLOduration=147.637654018 podStartE2EDuration="2m27.637654018s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.636148118 +0000 UTC m=+171.337496630" watchObservedRunningTime="2026-01-30 13:46:00.637654018 +0000 UTC m=+171.339002509" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.639192 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wzj2m" podStartSLOduration=147.639187628 podStartE2EDuration="2m27.639187628s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.606365795 +0000 UTC m=+171.307714286" watchObservedRunningTime="2026-01-30 13:46:00.639187628 +0000 UTC m=+171.340536119" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.671465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4dv9l" podStartSLOduration=147.671448325 podStartE2EDuration="2m27.671448325s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.669388851 +0000 UTC m=+171.370737352" watchObservedRunningTime="2026-01-30 13:46:00.671448325 +0000 UTC m=+171.372796816" Jan 30 13:46:00 crc 
kubenswrapper[4793]: I0130 13:46:00.699789 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4pnff" podStartSLOduration=8.699774849 podStartE2EDuration="8.699774849s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.698142387 +0000 UTC m=+171.399490898" watchObservedRunningTime="2026-01-30 13:46:00.699774849 +0000 UTC m=+171.401123340" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.708144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.708636 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.208616441 +0000 UTC m=+171.909964932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.729718 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lt7rr" podStartSLOduration=147.729696605 podStartE2EDuration="2m27.729696605s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.72835403 +0000 UTC m=+171.429702521" watchObservedRunningTime="2026-01-30 13:46:00.729696605 +0000 UTC m=+171.431045096" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.772239 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" podStartSLOduration=147.772220223 podStartE2EDuration="2m27.772220223s" podCreationTimestamp="2026-01-30 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.771589515 +0000 UTC m=+171.472938016" watchObservedRunningTime="2026-01-30 13:46:00.772220223 +0000 UTC m=+171.473568714" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.806102 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" podStartSLOduration=148.806083522 podStartE2EDuration="2m28.806083522s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:00.80486411 +0000 UTC m=+171.506212601" watchObservedRunningTime="2026-01-30 
13:46:00.806083522 +0000 UTC m=+171.507432013" Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.809789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.810180 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.310163789 +0000 UTC m=+172.011512290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.910705 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.910909 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.410856174 +0000 UTC m=+172.112204675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:00 crc kubenswrapper[4793]: I0130 13:46:00.911038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:00 crc kubenswrapper[4793]: E0130 13:46:00.911514 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.411503491 +0000 UTC m=+172.112851992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.011830 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.012288 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.512268307 +0000 UTC m=+172.213616798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.083416 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:01 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:01 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:01 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.083817 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.113500 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.113825 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.613811635 +0000 UTC m=+172.315160126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.214892 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.215327 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.715308601 +0000 UTC m=+172.416657092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.315927 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.316214 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.816202871 +0000 UTC m=+172.517551362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.417205 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.417385 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.917346368 +0000 UTC m=+172.618694859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.417553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.417922 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:01.917889992 +0000 UTC m=+172.619238493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.444293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"f504e11157414eba7b106c750b8214ece0121d39cbd674056e6e2bd96e575025"} Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.445348 4793 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mgv7t container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.445397 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t" podUID="ee3323fa-00f7-45ee-8d54-040e40398b5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.446893 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.446960 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.458268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nb75n" Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.518458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.519685 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.019671166 +0000 UTC m=+172.721019657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.620385 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.620861 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.120846224 +0000 UTC m=+172.822194715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.721036 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.721142 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.221121597 +0000 UTC m=+172.922470088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.721320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.721591 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.22158127 +0000 UTC m=+172.922929771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.822937 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.823077 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.323040004 +0000 UTC m=+173.024388495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.823482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.823764 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.323752703 +0000 UTC m=+173.025101194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.924781 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.924958 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.424933331 +0000 UTC m=+173.126281822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:01 crc kubenswrapper[4793]: I0130 13:46:01.925231 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:01 crc kubenswrapper[4793]: E0130 13:46:01.925508 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.425500305 +0000 UTC m=+173.126848796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.026593 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.026756 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.526721195 +0000 UTC m=+173.228069686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.026866 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.027257 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.527248438 +0000 UTC m=+173.228596929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.081020 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:02 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:02 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:02 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.081114 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.128171 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.128385 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.628350664 +0000 UTC m=+173.329699165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.128571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.128830 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.628819157 +0000 UTC m=+173.330167648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.229970 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.230164 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.730137458 +0000 UTC m=+173.431485949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.230286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.230563 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.730556638 +0000 UTC m=+173.431905129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.331765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.332153 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.832128587 +0000 UTC m=+173.533477078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.332414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.332712 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.832699032 +0000 UTC m=+173.534047523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.433190 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.433398 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.933364576 +0000 UTC m=+173.634713727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.433663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.434012 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:02.934000853 +0000 UTC m=+173.635349344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445187 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fbdzm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445240 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm" podUID="25ebc563-7e8a-4d8f-ace8-2d6c767816cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445374 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2mcj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.445440 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.534927 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.535636 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.035604011 +0000 UTC m=+173.736952512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.636415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.636680 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.136669996 +0000 UTC m=+173.838018487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.737592 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.737810 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.237779483 +0000 UTC m=+173.939127974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.737909 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.738287 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.238279095 +0000 UTC m=+173.939627586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.839455 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.839677 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.339657348 +0000 UTC m=+174.041005839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.839839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.840157 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.340148651 +0000 UTC m=+174.041497142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.940693 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.940882 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.440856697 +0000 UTC m=+174.142205188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:02 crc kubenswrapper[4793]: I0130 13:46:02.940960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:02 crc kubenswrapper[4793]: E0130 13:46:02.941418 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.441410531 +0000 UTC m=+174.142759022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.041893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.042093 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.542063315 +0000 UTC m=+174.243411816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.042315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.042699 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.542689781 +0000 UTC m=+174.244038272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.060331 4793 csr.go:261] certificate signing request csr-drqs4 is approved, waiting to be issued Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.081103 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:03 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:03 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:03 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.081160 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.097268 4793 csr.go:257] certificate signing request csr-drqs4 is issued Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.143625 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.143928 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.245482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.245931 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.74591559 +0000 UTC m=+174.447264081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.347015 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.347299 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.847285532 +0000 UTC m=+174.548634023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
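Every mount and unmount in this stretch fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers": the kubelet only learns about a CSI driver once the driver's node plugin registers over the plugin-registration socket, and the csi-hostpathplugin pod is only now coming up (note the ContainerStarted events nearby). One way to see what has registered on a node is to list the kubelet's registration directories; a sketch assuming the conventional /var/lib/kubelet layout (deployments can relocate these paths):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Default kubelet directories; both hold one entry per registered plugin.
    	for _, dir := range []string{
    		"/var/lib/kubelet/plugins_registry", // registration sockets
    		"/var/lib/kubelet/plugins",          // per-driver socket directories
    	} {
    		entries, err := os.ReadDir(dir)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		for _, e := range entries {
    			fmt.Println(filepath.Join(dir, e.Name()))
    		}
    	}
    }

Until a socket for kubevirt.io.hostpath-provisioner shows up there, the retry loop above is expected behavior rather than a fault in the volume itself.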
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.456769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.457095 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:03.957083266 +0000 UTC m=+174.658431757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.485166 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"b95be365117fbc3c51a9abafa8ddf9eb5242ebf0fda4266d57f4ed480b28135e"} Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.557364 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.557520 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.057502104 +0000 UTC m=+174.758850585 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.557590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.557936 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.057919965 +0000 UTC m=+174.759268456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.658328 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.658699 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.158670331 +0000 UTC m=+174.860018882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.658742 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.659215 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.159200286 +0000 UTC m=+174.860548777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.699636 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.700367 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.712323 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.713472 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.755831 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.760747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.760987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.761084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.761214 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.261197215 +0000 UTC m=+174.962545706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.841196 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.842029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862514 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862572 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.862963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.863214 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.363203094 +0000 UTC m=+175.064551585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
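The probe entries in this window ("HTTP probe failed with statuscode: 500", "connect: connection refused") follow the usual kubelet HTTP probe rule: any status in the 200-399 range passes, and anything else, including a failed connection, counts as a probe failure. A minimal stand-alone check in that spirit (illustrative, not the kubelet's prober; the address is taken from the log):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe returns true when the endpoint answers with a 2xx/3xx status,
    // matching the pass/fail split implied by the log's probe messages.
    func probe(url string, timeout time.Duration) (bool, string) {
    	client := &http.Client{Timeout: timeout}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err.Error() // e.g. "connect: connection refused"
    	}
    	defer resp.Body.Close()
    	ok := resp.StatusCode >= 200 && resp.StatusCode < 400
    	return ok, fmt.Sprintf("HTTP probe got statuscode: %d", resp.StatusCode)
    }

    func main() {
    	ok, out := probe("http://10.217.0.27:8080/", time.Second)
    	fmt.Println(ok, out)
    }

Startup probes like the router's simply run this check repeatedly; the pod is restarted only after the configured failure threshold is exhausted, which is why these failures recur without immediate consequence here.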
Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.929806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.978113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.978411 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.47838672 +0000 UTC m=+175.179735211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:03 crc kubenswrapper[4793]: I0130 13:46:03.978470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:03 crc kubenswrapper[4793]: E0130 13:46:03.979286 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.479277493 +0000 UTC m=+175.180625984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.019468 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.040153 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.042254 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.071741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.079116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.079264 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.579238699 +0000 UTC m=+175.280587210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.079440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.079701 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.5796905 +0000 UTC m=+175.281038991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.082119 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:04 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:04 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:04 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.082165 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.094075 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.098813 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 13:41:03 +0000 UTC, rotation deadline is 2026-10-24 18:21:48.974983894 +0000 UTC Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.098879 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6412h35m44.8761246s for next certificate rotation Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.150363 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-65rgb" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.180641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.180944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.181007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.181042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.181198 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.681175876 +0000 UTC m=+175.382524367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.185538 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.194385 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.220847 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.255419 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284727 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284775 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284808 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284859 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284880 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.284899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.287239 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.787223612 +0000 UTC m=+175.488572113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.287370 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.287734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.325464 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.326677 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.344282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"certified-operators-g9t8x\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.368799 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-899ps" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.378286 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.381638 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.387139 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.887125076 +0000 UTC m=+175.588473567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387506 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387565 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387621 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.387637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.391139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.391453 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.89143794 +0000 UTC m=+175.592786441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.392330 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.459912 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"community-operators-6qnl2\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.463793 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.463838 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.464097 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.464117 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.489435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.489959 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.490021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.490094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.490932 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:04.990914123 +0000 UTC m=+175.692262614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.494132 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.494428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.540584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"414b16d92436bb895949171adfbbc26c557f08c47f27890387d84b19dad2dd36"} Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.541452 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t46g"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.542493 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.546455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"certified-operators-j4vzj\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.554897 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.585954 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t46g"] Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591543 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591584 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591649 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.591833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.593692 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.093645961 +0000 UTC m=+175.794994522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.692540 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.692953 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.693209 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.193190905 +0000 UTC m=+175.894539386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693294 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.693475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.695362 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.195352932 +0000 UTC m=+175.896701423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.728625 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.728670 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.757256 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.799679 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.801228 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.301204992 +0000 UTC m=+176.002553543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
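The router and apiserver probe failures in this stretch include the verbose healthz body, one check per line: "[+]name ok" for passing checks, "[-]name failed: reason withheld" for failing ones, and a trailing "healthz check failed" or "livez check failed" summary (the apiserver's full livez body appears just below). Parsing that shape is straightforward; a sketch:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseHealthz splits a verbose healthz/livez body into passing and
    // failing check names, following the [+]/[-] prefixes seen in the log.
    func parseHealthz(body string) (ok, failed []string) {
    	for _, line := range strings.Split(body, "\n") {
    		line = strings.TrimSpace(line)
    		if len(line) < 4 || (line[:3] != "[+]" && line[:3] != "[-]") {
    			continue // summary lines such as "livez check failed"
    		}
    		fields := strings.Fields(line[3:])
    		if len(fields) == 0 {
    			continue
    		}
    		name := strings.TrimSuffix(fields[0], ":")
    		if line[:3] == "[+]" {
    			ok = append(ok, name)
    		} else {
    			failed = append(failed, name)
    		}
    	}
    	return ok, failed
    }

    func main() {
    	body := "[-]backend-http failed: reason withheld\n" +
    		"[-]has-synced failed: reason withheld\n" +
    		"[+]process-running ok\n" +
    		"healthz check failed"
    	ok, failed := parseHealthz(body)
    	fmt.Println("ok:", ok, "failed:", failed)
    }

The sample body is the router's output from earlier in the log; against the apiserver livez body below, the same parser isolates the two failing poststarthook checks.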
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.821008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.821072 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.845245 4793 patch_prober.go:28] interesting pod/console-f9d7485db-kknzc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.845321 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kknzc" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.879130 4793 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cwwfj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]log ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]etcd ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/max-in-flight-filter ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 13:46:04 crc kubenswrapper[4793]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 13:46:04 crc kubenswrapper[4793]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-startinformers ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 13:46:04 crc kubenswrapper[4793]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 13:46:04 crc kubenswrapper[4793]: livez check failed Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.879190 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" podUID="ea703d52-c081-418f-9343-61b68296314f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130
13:46:04.889405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.889405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.899859 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"community-operators-9t46g\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.903069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:04 crc kubenswrapper[4793]: E0130 13:46:04.904884 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.404868495 +0000 UTC m=+176.106216976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.905140 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.905140 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g"
Jan 30 13:46:04 crc kubenswrapper[4793]: I0130 13:46:04.943090 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 13:46:04 crc kubenswrapper[4793]: W0130 13:46:04.987176 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode9ad6625_d668_4687_aae5_d2363abda627.slice/crio-8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e WatchSource:0}: Error finding container 8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e: Status 404 returned error can't find the container with id 8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.006215 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.006631 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.506613838 +0000 UTC m=+176.207962329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.013783 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.072814 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.078910 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2lv2p"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.085614 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:05 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:05 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:05 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.085677 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.108075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.108468 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.608452993 +0000 UTC m=+176.309801484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.209389 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.210219 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.710190075 +0000 UTC m=+176.411538576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.313159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.313509 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.813497789 +0000 UTC m=+176.514846280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.414641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.415253 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:05.915239322 +0000 UTC m=+176.616587813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.518877 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.519260 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.019247484 +0000 UTC m=+176.720595975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.533139 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fbdzm"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.545426 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" event={"ID":"6e8eea51-5cd4-4a66-9d0e-fc9fb115807e","Type":"ContainerStarted","Data":"c3fdd23e324e7fe9c6a51444399362039955a7540651b25e89debd5484d5d7b2"}
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.547276 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerStarted","Data":"8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e"}
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.560220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s5tx"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.621607 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.623239 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.123226424 +0000 UTC m=+176.824574915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.722774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.723213 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.223190981 +0000 UTC m=+176.924539532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.783377 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mgv7t"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.795215 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.823439 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.824109 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.324093581 +0000 UTC m=+177.025442072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.874153 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gsr67" podStartSLOduration=13.874135586 podStartE2EDuration="13.874135586s" podCreationTimestamp="2026-01-30 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:05.873031147 +0000 UTC m=+176.574379638" watchObservedRunningTime="2026-01-30 13:46:05.874135586 +0000 UTC m=+176.575484077"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.908872 4793 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 13:46:05 crc kubenswrapper[4793]: I0130 13:46:05.924874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
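The plugin_watcher entry at 13:46:05.908872 is the first step out of the failure loop: the csi-hostpathplugin pod has started (see the PLEG ContainerStarted event just above) and created its registration socket, and the kubelet's plugin watcher, which watches /var/lib/kubelet/plugins_registry for sockets, has picked it up into its desired state cache. A sketch of that discovery step, assuming the github.com/fsnotify/fsnotify package; the real kubelet plugin watcher is more involved, but does roughly this:

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	const pluginDir = "/var/lib/kubelet/plugins_registry"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(pluginDir); err != nil {
		log.Fatal(err)
	}

	for ev := range w.Events {
		// A newly created *.sock file is a driver asking to register, e.g.
		// kubevirt.io.hostpath-provisioner-reg.sock in the entry above.
		if ev.Op&fsnotify.Create != 0 && filepath.Ext(ev.Name) == ".sock" {
			log.Printf("adding socket path %s to desired state cache", ev.Name)
		}
	}
}
```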
Jan 30 13:46:05 crc kubenswrapper[4793]: E0130 13:46:05.925225 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.425212727 +0000 UTC m=+177.126561218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.025964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.026402 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.526384794 +0000 UTC m=+177.227733275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.082812 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:06 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:06 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:06 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.082857 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.083889 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.110559 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.140944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.141296 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.641284332 +0000 UTC m=+177.342632823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.173006 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.173968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.185398 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.244649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.244969 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.744943225 +0000 UTC m=+177.446291716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.246660 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.246886 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.247003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.247132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.247375 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.747365939 +0000 UTC m=+177.448714430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.299111 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.321479 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.347267 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t46g"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.347786 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348379 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.348777 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.848755373 +0000 UTC m=+177.550103924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.348930 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.349103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.400063 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"redhat-marketplace-kvlgd\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: W0130 13:46:06.408499 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod551044e9_867a_4307_a28c_ea34bab39473.slice/crio-2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b WatchSource:0}: Error finding container 2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b: Status 404 returned error can't find the container with id 2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.449579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.449964 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:46:06.949952811 +0000 UTC m=+177.651301302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pfnjs" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.529772 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.537265 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.537752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd"
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.550745 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:06 crc kubenswrapper[4793]: E0130 13:46:06.551233 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:46:07.051216861 +0000 UTC m=+177.752565342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.578606 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"]
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.593621 4793 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T13:46:05.909141125Z","Handler":null,"Name":""}
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.600210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b"}
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.620292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"c106e074002678528ae31ccdf1bb58932690b2a742055da2e9f297d7f5cc6c7c"}
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.628922 4793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.628958 4793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.636367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"0e22ed488b0d95eaf0cf80ba9106bf9da157b5ab0630c5fce06e88b1a1a7e207"}
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.643684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerStarted","Data":"ee249470c28be7e643027b7d1d76ee1a880e2751bfa6c780b72800ea7daeb066"}
Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.653987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx"
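The three entries at 13:46:06.593621 (RegisterPlugin started) and 13:46:06.628922/628958 (csi_plugin.go validate and register) are the registration handshake completing: the kubelet dials the discovered socket, asks the plugin for its info, validates the advertised name and versions, and finally adds kubevirt.io.hostpath-provisioner to its list of registered CSI drivers, which is what the failing mount and unmount operations above have been waiting on. A sketch of the driver side of that handshake, assuming the k8s.io/kubelet pluginregistration v1 API (the generated interface may differ slightly between releases); the name, endpoint, versions, and socket path mirror the log:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registration struct{}

// GetInfo answers the kubelet's first call on the registration socket.
func (registration) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "kubevirt.io.hostpath-provisioner",
		Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// NotifyRegistrationStatus receives the kubelet's verdict after validation.
func (registration) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet reported registration: registered=%v err=%q", s.PluginRegistered, s.Error)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registration{})
	log.Fatal(srv.Serve(l))
}
```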
pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.654149 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.654215 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.658854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerStarted","Data":"8b8825b53f65bff81a9400879a415d5b1dc1d84fe8464a986eee69eada339360"} Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.696320 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.696302212 podStartE2EDuration="3.696302212s" podCreationTimestamp="2026-01-30 13:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:06.694436802 +0000 UTC m=+177.395785313" watchObservedRunningTime="2026-01-30 13:46:06.696302212 +0000 UTC m=+177.397650703" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.755956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.756100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.756173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.756440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.757190 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.788846 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"redhat-marketplace-mn7sx\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.914674 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.992466 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.993398 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.998785 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:46:06 crc kubenswrapper[4793]: I0130 13:46:06.999039 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.006110 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.006164 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.009901 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.060537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.060604 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.078938 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:46:07 crc kubenswrapper[4793]: 
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.084308 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:46:07 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Jan 30 13:46:07 crc kubenswrapper[4793]: [+]process-running ok
Jan 30 13:46:07 crc kubenswrapper[4793]: healthz check failed
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.084353 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.099814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pfnjs\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.118605 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"]
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.120643 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.125534 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.128492 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"]
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.161738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.161935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.162007 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.162428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.167685 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.199557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.262872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.262957 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.262998 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.318857 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.338251 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"]
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.364012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.364098 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.364125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.385014 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.416222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.416494 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.428650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"redhat-operators-vn6kf\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.456338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.519087 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"]
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.520280 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.530669 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"]
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.569666 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.569714 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.569792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.657241 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.671207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.671293 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.672116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.672138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.672516 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.691406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5"}
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.718361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"redhat-operators-fxl8f\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " pod="openshift-marketplace/redhat-operators-fxl8f"
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.723451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059"}
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.742333 4793 generic.go:334] "Generic (PLEG): container finished" podID="02ec4db2-0283-437a-999f-d50a10ab046c" containerID="9d4a750d40d93b392b9501779e0e72734cfa6f671669f4891033addc84b52774" exitCode=0
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.742420 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"9d4a750d40d93b392b9501779e0e72734cfa6f671669f4891033addc84b52774"}
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.756395 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.777151 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerStarted","Data":"e438cc892f7ad0406801bd88b27ea7d9474a125c514f11d8ac2ab76f42215f27"}
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.822094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011"}
Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.826931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerStarted","Data":"097e24f55ac27743bd9630217aba68c9f9433798eb25d4a7ca41ee8c4336a653"}
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:46:07 crc kubenswrapper[4793]: I0130 13:46:07.889405 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.000128 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:46:08 crc kubenswrapper[4793]: W0130 13:46:08.028888 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e18cea_cac6_4eb8_b8de_2885fcf57497.slice/crio-a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471 WatchSource:0}: Error finding container a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471: Status 404 returned error can't find the container with id a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.101536 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:08 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:08 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:08 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.101609 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.135666 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.404869 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.831941 4793 generic.go:334] "Generic (PLEG): container finished" podID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerID="bf4b42ce53f022eba5077f61f642433a8e1373279291fcdbe9bff308d17c0e0d" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.831980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"bf4b42ce53f022eba5077f61f642433a8e1373279291fcdbe9bff308d17c0e0d"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.838355 4793 generic.go:334] "Generic (PLEG): container finished" podID="551044e9-867a-4307-a28c-ea34bab39473" containerID="ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.838996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.848709 4793 generic.go:334] "Generic (PLEG): container finished" podID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" 
containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.848761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.850205 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.853012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerStarted","Data":"a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.862054 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cwwfj" Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.868489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerStarted","Data":"13f1368c8d56c2f3e8a8787fdd36533c727a2ee0ef9f036522e165e8dc981e1f"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.880251 4793 generic.go:334] "Generic (PLEG): container finished" podID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerID="3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.880317 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.885654 4793 generic.go:334] "Generic (PLEG): container finished" podID="e9ad6625-d668-4687-aae5-d2363abda627" containerID="8b8825b53f65bff81a9400879a415d5b1dc1d84fe8464a986eee69eada339360" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.885727 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerDied","Data":"8b8825b53f65bff81a9400879a415d5b1dc1d84fe8464a986eee69eada339360"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.890188 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerStarted","Data":"2a36caa8c6f67671e2dde28b9bd4479d99be637b04d8c44f3c236b38be207c24"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.892221 4793 generic.go:334] "Generic (PLEG): container finished" podID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerID="f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.893152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 
13:46:08.898211 4793 generic.go:334] "Generic (PLEG): container finished" podID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerID="0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.898310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerDied","Data":"0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.905223 4793 generic.go:334] "Generic (PLEG): container finished" podID="89a43c58-d327-429a-96cd-9f9f5393368a" containerID="1292ed33cb4910e7379d650e9bdaa57110f788906801a44590e292cca7705790" exitCode=0 Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.905441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"1292ed33cb4910e7379d650e9bdaa57110f788906801a44590e292cca7705790"} Jan 30 13:46:08 crc kubenswrapper[4793]: I0130 13:46:08.905547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerStarted","Data":"1f4643d93c77f9c1fa9d15f80b1a4b9e9c2ad2fc279deeae64b1715da148c011"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.084749 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:09 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:09 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:09 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.084816 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.917713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerStarted","Data":"eb700f355f93bc4ce723121dea6e4b20a49a9db0e924cab9c3f4211a583c1f98"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.920372 4793 generic.go:334] "Generic (PLEG): container finished" podID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" exitCode=0 Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.920440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.930297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerStarted","Data":"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"} Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 
13:46:09.930345 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.949176 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.949159035 podStartE2EDuration="3.949159035s" podCreationTimestamp="2026-01-30 13:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:09.935189058 +0000 UTC m=+180.636537559" watchObservedRunningTime="2026-01-30 13:46:09.949159035 +0000 UTC m=+180.650507526" Jan 30 13:46:09 crc kubenswrapper[4793]: I0130 13:46:09.969826 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" podStartSLOduration=157.969810538 podStartE2EDuration="2m37.969810538s" podCreationTimestamp="2026-01-30 13:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:46:09.967842326 +0000 UTC m=+180.669190817" watchObservedRunningTime="2026-01-30 13:46:09.969810538 +0000 UTC m=+180.671159029" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.082266 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:10 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:10 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:10 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.082341 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.259522 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.266896 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.323090 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") pod \"e9ad6625-d668-4687-aae5-d2363abda627\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326271 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") pod \"6db0dcc6-874c-40f9-a0b7-309149c78f48\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326387 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") pod \"6db0dcc6-874c-40f9-a0b7-309149c78f48\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326419 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") pod \"6db0dcc6-874c-40f9-a0b7-309149c78f48\" (UID: \"6db0dcc6-874c-40f9-a0b7-309149c78f48\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.326471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") pod \"e9ad6625-d668-4687-aae5-d2363abda627\" (UID: \"e9ad6625-d668-4687-aae5-d2363abda627\") " Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.327161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume" (OuterVolumeSpecName: "config-volume") pod "6db0dcc6-874c-40f9-a0b7-309149c78f48" (UID: "6db0dcc6-874c-40f9-a0b7-309149c78f48"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.327468 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6db0dcc6-874c-40f9-a0b7-309149c78f48-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.327501 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9ad6625-d668-4687-aae5-d2363abda627" (UID: "e9ad6625-d668-4687-aae5-d2363abda627"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.340902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm" (OuterVolumeSpecName: "kube-api-access-2qxpm") pod "6db0dcc6-874c-40f9-a0b7-309149c78f48" (UID: "6db0dcc6-874c-40f9-a0b7-309149c78f48"). InnerVolumeSpecName "kube-api-access-2qxpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.343176 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9ad6625-d668-4687-aae5-d2363abda627" (UID: "e9ad6625-d668-4687-aae5-d2363abda627"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.347395 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6db0dcc6-874c-40f9-a0b7-309149c78f48" (UID: "6db0dcc6-874c-40f9-a0b7-309149c78f48"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428301 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ad6625-d668-4687-aae5-d2363abda627-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428364 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9ad6625-d668-4687-aae5-d2363abda627-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428392 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxpm\" (UniqueName: \"kubernetes.io/projected/6db0dcc6-874c-40f9-a0b7-309149c78f48-kube-api-access-2qxpm\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.428404 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6db0dcc6-874c-40f9-a0b7-309149c78f48-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.639201 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2lf59" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.935136 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" event={"ID":"6db0dcc6-874c-40f9-a0b7-309149c78f48","Type":"ContainerDied","Data":"02184320f6531b0c82ba4d167218eef7190463e44618fd9bd7006fada9858678"} Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.935176 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02184320f6531b0c82ba4d167218eef7190463e44618fd9bd7006fada9858678" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.935280 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.939498 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.939509 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9ad6625-d668-4687-aae5-d2363abda627","Type":"ContainerDied","Data":"8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e"} Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.940015 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8015b0546ef3f98dfbde3c8621c176730ee95ee7767185d6e04f9b83c4d7ae4e" Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.942386 4793 generic.go:334] "Generic (PLEG): container finished" podID="8886f940-a230-480f-a911-8caa96286196" containerID="eb700f355f93bc4ce723121dea6e4b20a49a9db0e924cab9c3f4211a583c1f98" exitCode=0 Jan 30 13:46:10 crc kubenswrapper[4793]: I0130 13:46:10.942429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerDied","Data":"eb700f355f93bc4ce723121dea6e4b20a49a9db0e924cab9c3f4211a583c1f98"} Jan 30 13:46:11 crc kubenswrapper[4793]: I0130 13:46:11.083847 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:11 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:11 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:11 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:11 crc kubenswrapper[4793]: I0130 13:46:11.083949 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.082825 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:12 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:12 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:12 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.082871 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.287279 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.359124 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") pod \"8886f940-a230-480f-a911-8caa96286196\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.360071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") pod \"8886f940-a230-480f-a911-8caa96286196\" (UID: \"8886f940-a230-480f-a911-8caa96286196\") " Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.359325 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8886f940-a230-480f-a911-8caa96286196" (UID: "8886f940-a230-480f-a911-8caa96286196"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.394780 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8886f940-a230-480f-a911-8caa96286196" (UID: "8886f940-a230-480f-a911-8caa96286196"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.414419 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.414519 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.463201 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8886f940-a230-480f-a911-8caa96286196-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.463236 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8886f940-a230-480f-a911-8caa96286196-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.954588 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8886f940-a230-480f-a911-8caa96286196","Type":"ContainerDied","Data":"2a36caa8c6f67671e2dde28b9bd4479d99be637b04d8c44f3c236b38be207c24"} Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.954627 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a36caa8c6f67671e2dde28b9bd4479d99be637b04d8c44f3c236b38be207c24" Jan 30 13:46:12 crc kubenswrapper[4793]: I0130 13:46:12.954699 4793 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:46:13 crc kubenswrapper[4793]: I0130 13:46:13.087669 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:13 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:13 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:13 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:13 crc kubenswrapper[4793]: I0130 13:46:13.087732 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.080562 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:14 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:14 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:14 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.080909 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463101 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463165 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463109 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.463536 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:14 crc kubenswrapper[4793]: I0130 13:46:14.821281 4793 patch_prober.go:28] interesting pod/console-f9d7485db-kknzc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 13:46:14 crc kubenswrapper[4793]: 
I0130 13:46:14.821337 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kknzc" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 13:46:15 crc kubenswrapper[4793]: I0130 13:46:15.082224 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:15 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:15 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:15 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:15 crc kubenswrapper[4793]: I0130 13:46:15.082288 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:16 crc kubenswrapper[4793]: I0130 13:46:16.081025 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:16 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:16 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:16 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:16 crc kubenswrapper[4793]: I0130 13:46:16.081093 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:17 crc kubenswrapper[4793]: I0130 13:46:17.080529 4793 patch_prober.go:28] interesting pod/router-default-5444994796-2lv2p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:46:17 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Jan 30 13:46:17 crc kubenswrapper[4793]: [+]process-running ok Jan 30 13:46:17 crc kubenswrapper[4793]: healthz check failed Jan 30 13:46:17 crc kubenswrapper[4793]: I0130 13:46:17.080589 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lv2p" podUID="0e50ecc2-1bbc-4e8c-8d46-edf8369095bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:46:18 crc kubenswrapper[4793]: I0130 13:46:18.080642 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:46:18 crc kubenswrapper[4793]: I0130 13:46:18.083090 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2lv2p" Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.055638 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.060055 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"] Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.067174 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" containerID="cri-o://d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46" gracePeriod=30 Jan 30 13:46:21 crc kubenswrapper[4793]: I0130 13:46:21.067399 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" containerID="cri-o://9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec" gracePeriod=30 Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.038660 4793 generic.go:334] "Generic (PLEG): container finished" podID="7dbc78d6-c879-4284-89b6-169d359839bf" containerID="9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec" exitCode=0 Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.038741 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerDied","Data":"9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec"} Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.041374 4793 generic.go:334] "Generic (PLEG): container finished" podID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerID="d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46" exitCode=0 Jan 30 13:46:22 crc kubenswrapper[4793]: I0130 13:46:22.041403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerDied","Data":"d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46"} Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.069645 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.069758 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462804 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462874 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: 
connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462816 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.462964 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.463010 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.463601 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818"} pod="openshift-console/downloads-7954f5f757-sd6hs" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.463708 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" containerID="cri-o://f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818" gracePeriod=2 Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.464226 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.464264 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.831878 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:46:24 crc kubenswrapper[4793]: I0130 13:46:24.839465 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 13:46:25 crc kubenswrapper[4793]: I0130 13:46:25.009713 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:25 crc kubenswrapper[4793]: I0130 13:46:25.009775 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:25 crc 
kubenswrapper[4793]: I0130 13:46:25.086277 4793 generic.go:334] "Generic (PLEG): container finished" podID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerID="f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818" exitCode=0 Jan 30 13:46:25 crc kubenswrapper[4793]: I0130 13:46:25.086836 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerDied","Data":"f99529531b1a090c1e9f4ecee92d599c59303bd9a673012fd1cacb5057890818"} Jan 30 13:46:27 crc kubenswrapper[4793]: I0130 13:46:27.393313 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.070273 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.070343 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.463189 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:34 crc kubenswrapper[4793]: I0130 13:46:34.463260 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:35 crc kubenswrapper[4793]: I0130 13:46:35.009911 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:35 crc kubenswrapper[4793]: I0130 13:46:35.010281 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:35 crc kubenswrapper[4793]: I0130 13:46:35.751662 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8b5w" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.413878 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.414471 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.414518 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.415126 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:46:42 crc kubenswrapper[4793]: I0130 13:46:42.415193 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629" gracePeriod=600 Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.183958 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629" exitCode=0 Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.184032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"} Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.463510 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.463588 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986090 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:46:44 crc kubenswrapper[4793]: E0130 13:46:44.986413 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerName="collect-profiles" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986433 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerName="collect-profiles" Jan 30 13:46:44 crc kubenswrapper[4793]: E0130 13:46:44.986446 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8886f940-a230-480f-a911-8caa96286196" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 
13:46:44.986454 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8886f940-a230-480f-a911-8caa96286196" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: E0130 13:46:44.986477 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ad6625-d668-4687-aae5-d2363abda627" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986484 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ad6625-d668-4687-aae5-d2363abda627" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986591 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ad6625-d668-4687-aae5-d2363abda627" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986602 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8886f940-a230-480f-a911-8caa96286196" containerName="pruner" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.986612 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" containerName="collect-profiles" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.987085 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.989195 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.989602 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:46:44 crc kubenswrapper[4793]: I0130 13:46:44.994163 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.008975 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.009230 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.070286 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.070362 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.162180 
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.162447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.263232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.263349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.263384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.281825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:45 crc kubenswrapper[4793]: I0130 13:46:45.313576 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.386566 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.387596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.402914 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.553302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.553363 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.553391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654364 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654420 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.654542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.674754 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:49 crc kubenswrapper[4793]: I0130 13:46:49.758959 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:46:54 crc kubenswrapper[4793]: I0130 13:46:54.463614 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:46:54 crc kubenswrapper[4793]: I0130 13:46:54.463925 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.009526 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.009587 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.070576 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 13:46:55 crc kubenswrapper[4793]: I0130 13:46:55.070633 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 13:46:57 crc kubenswrapper[4793]: E0130 13:46:57.874542 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 13:46:57 crc kubenswrapper[4793]: E0130 13:46:57.875427 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2blm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9t46g_openshift-marketplace(551044e9-867a-4307-a28c-ea34bab39473): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\": context canceled" logger="UnhandledError"
Jan 30 13:46:57 crc kubenswrapper[4793]: E0130 13:46:57.876698 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea: Get \\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:375463ce314e9870c2ef316f6ae8ec2bad821721d7dac5d2800db42bce264bea\\\": context canceled\"" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473"
Jan 30 13:47:03 crc kubenswrapper[4793]: E0130 13:47:03.910018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473"
Jan 30 13:47:04 crc kubenswrapper[4793]: I0130 13:47:04.463663 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:04 crc kubenswrapper[4793]: I0130 13:47:04.463724 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.011437 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.011496 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.070897 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j5zhl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" start-of-body=
Jan 30 13:47:05 crc kubenswrapper[4793]: I0130 13:47:05.070957 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.334790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl" event={"ID":"7dbc78d6-c879-4284-89b6-169d359839bf","Type":"ContainerDied","Data":"029de3b1f28797b6cbbf4b7545deaf6781dd6b3401588287ec9fa2ad62c13962"}
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.335321 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029de3b1f28797b6cbbf4b7545deaf6781dd6b3401588287ec9fa2ad62c13962"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.439697 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.490954 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:47:13 crc kubenswrapper[4793]: E0130 13:47:13.491399 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.491416 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.491614 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" containerName="route-controller-manager"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.492170 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.497526 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614122 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614182 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614204 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") pod \"7dbc78d6-c879-4284-89b6-169d359839bf\" (UID: \"7dbc78d6-c879-4284-89b6-169d359839bf\") "
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614487 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.614628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.615337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config" (OuterVolumeSpecName: "config") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.615845 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.621707 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj" (OuterVolumeSpecName: "kube-api-access-9mhtj") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "kube-api-access-9mhtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.625034 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7dbc78d6-c879-4284-89b6-169d359839bf" (UID: "7dbc78d6-c879-4284-89b6-169d359839bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716686 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716728 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.716880 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717144 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbc78d6-c879-4284-89b6-169d359839bf-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717167 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbc78d6-c879-4284-89b6-169d359839bf-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717177 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mhtj\" (UniqueName: \"kubernetes.io/projected/7dbc78d6-c879-4284-89b6-169d359839bf-kube-api-access-9mhtj\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.717806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.718419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.723851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.733289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"route-controller-manager-674655ccb6-8dlkl\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") " pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:13 crc kubenswrapper[4793]: I0130 13:47:13.817148 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.337515 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.368971 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"]
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.371486 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j5zhl"]
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.404952 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbc78d6-c879-4284-89b6-169d359839bf" path="/var/lib/kubelet/pods/7dbc78d6-c879-4284-89b6-169d359839bf/volumes"
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.463229 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:14 crc kubenswrapper[4793]: I0130 13:47:14.463282 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:14 crc kubenswrapper[4793]: E0130 13:47:14.652580 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 13:47:14 crc kubenswrapper[4793]: E0130 13:47:14.652713 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9nnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6qnl2_openshift-marketplace(840c8b00-73a4-4378-b5a8-83f2595916a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:14 crc kubenswrapper[4793]: E0130 13:47:14.653880 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6qnl2" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4"
Jan 30 13:47:16 crc kubenswrapper[4793]: I0130 13:47:16.009577 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qsdzw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded" start-of-body=
Jan 30 13:47:16 crc kubenswrapper[4793]: I0130 13:47:16.009979 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded"
Jan 30 13:47:18 crc kubenswrapper[4793]: E0130 13:47:18.993468 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6qnl2" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4"
Jan 30 13:47:19 crc kubenswrapper[4793]: E0130 13:47:19.064519 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 13:47:19 crc kubenswrapper[4793]: E0130 13:47:19.064677 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2w4dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fxl8f_openshift-marketplace(0005ba9f-0f70-4df4-b588-8e6f941fec61): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:19 crc kubenswrapper[4793]: E0130 13:47:19.066057 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fxl8f" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61"
Jan 30 13:47:19 crc kubenswrapper[4793]: I0130 13:47:19.178280 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.059933 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fxl8f" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.591410 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.591589 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg5zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g9t8x_openshift-marketplace(b34660b0-a161-4587-96a6-1a86a2e3f632): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.592945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g9t8x" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.737319 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.737494 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j4vzj_openshift-marketplace(02ec4db2-0283-437a-999f-d50a10ab046c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:22 crc kubenswrapper[4793]: E0130 13:47:22.738716 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j4vzj" podUID="02ec4db2-0283-437a-999f-d50a10ab046c"
Jan 30 13:47:23 crc kubenswrapper[4793]: W0130 13:47:23.547975 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfd400d07_c5a8_40c2_9c01_dab9908caf49.slice/crio-f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9 WatchSource:0}: Error finding container f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9: Status 404 returned error can't find the container with id f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.549266 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j4vzj" podUID="02ec4db2-0283-437a-999f-d50a10ab046c"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.549798 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g9t8x" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.635820 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.635969 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhvt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kvlgd_openshift-marketplace(08b55ba0-087d-42ec-a0c5-538f0a3c0987): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.637369 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kvlgd" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.680796 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.730451 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.730602 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn89t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mn7sx_openshift-marketplace(96451b9c-e42f-43ae-9f62-bc830fa1ad9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.731851 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mn7sx" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.749114 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.749990 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.750037 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.750461 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" containerName="controller-manager"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.752998 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776528 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776722 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.776820 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") pod \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\" (UID: \"268883cf-a27e-4b69-bd41-18f0a35c3e6a\") "
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777144 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777248 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777289 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.777414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.781579 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config" (OuterVolumeSpecName: "config") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.781806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.781883 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca" (OuterVolumeSpecName: "client-ca") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.786468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.788392 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77" (OuterVolumeSpecName: "kube-api-access-xmq77") pod "268883cf-a27e-4b69-bd41-18f0a35c3e6a" (UID: "268883cf-a27e-4b69-bd41-18f0a35c3e6a"). InnerVolumeSpecName "kube-api-access-xmq77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.790715 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.871429 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.871980 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwrln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vn6kf_openshift-marketplace(89a43c58-d327-429a-96cd-9f9f5393368a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:47:23 crc kubenswrapper[4793]: E0130 13:47:23.873677 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vn6kf" podUID="89a43c58-d327-429a-96cd-9f9f5393368a"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.880882 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881208 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881322 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmq77\" (UniqueName: \"kubernetes.io/projected/268883cf-a27e-4b69-bd41-18f0a35c3e6a-kube-api-access-xmq77\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881336 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881347 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881377 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/268883cf-a27e-4b69-bd41-18f0a35c3e6a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.881389 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/268883cf-a27e-4b69-bd41-18f0a35c3e6a-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.882902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.883197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.883364 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.887546 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:23 crc kubenswrapper[4793]: I0130 13:47:23.896951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"controller-manager-74b476d486-lccjp\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") " pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.072195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.122581 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.125044 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:47:24 crc kubenswrapper[4793]: W0130 13:47:24.136424 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfbfc4931_01b5_4cc0_a5f5_c3d4e42121a5.slice/crio-e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e WatchSource:0}: Error finding container e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e: Status 404 returned error can't find the container with id e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e
Jan 30 13:47:24 crc kubenswrapper[4793]: W0130 13:47:24.153389 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11837748_ddd9_46ac_8f23_b0b77c511c39.slice/crio-7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc WatchSource:0}: Error finding container 7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc: Status 404 returned error can't find the container with id 7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.358094 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.387299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.390198 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerStarted","Data":"7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.391602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerStarted","Data":"e5b939e411d2d32f4a5a28df3de1f1b782b1984cc3579e1a45fcab992aaff3dd"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.391638 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerStarted","Data":"f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.393250 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerStarted","Data":"a76af574ae39e77263355b1e3c87d747ab2f9d1604f79be4a37d4e9cca505251"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.396062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw" event={"ID":"268883cf-a27e-4b69-bd41-18f0a35c3e6a","Type":"ContainerDied","Data":"86ef773c0816c089c75665928f1abef5c6f766f515abfa5bb1d78513d4527722"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.396112 4793 scope.go:117] "RemoveContainer" containerID="d19f43efe0461581ea609f879abb2a31d725dd71966c84254d6bb05f0e18ea46"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.396122 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qsdzw"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.413430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.422820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sd6hs" event={"ID":"6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2","Type":"ContainerStarted","Data":"df43223f45f3ca6f694981bf211205045b8b9092bfab58e6c8f7a89f5b8ccd87"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.424278 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sd6hs"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.425958 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.432754 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.460987 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=40.460973325 podStartE2EDuration="40.460973325s" podCreationTimestamp="2026-01-30 13:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:24.426187833 +0000 UTC m=+255.127536324" watchObservedRunningTime="2026-01-30 13:47:24.460973325 +0000 UTC m=+255.162321816"
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.461609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerStarted","Data":"e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e"}
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.474748 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.474910 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 30 13:47:24 crc kubenswrapper[4793]: E0130 13:47:24.476256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vn6kf"
podUID="89a43c58-d327-429a-96cd-9f9f5393368a" Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.476920 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.477036 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:24 crc kubenswrapper[4793]: E0130 13:47:24.480317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mn7sx" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" Jan 30 13:47:24 crc kubenswrapper[4793]: E0130 13:47:24.482270 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kvlgd" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.592938 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:47:24 crc kubenswrapper[4793]: I0130 13:47:24.595630 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qsdzw"] Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.470970 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerStarted","Data":"f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b"} Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.479832 4793 generic.go:334] "Generic (PLEG): container finished" podID="551044e9-867a-4307-a28c-ea34bab39473" containerID="8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d" exitCode=0 Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.481295 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d"} Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.481435 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:25 crc kubenswrapper[4793]: I0130 13:47:25.481611 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:26 
crc kubenswrapper[4793]: I0130 13:47:26.412179 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268883cf-a27e-4b69-bd41-18f0a35c3e6a" path="/var/lib/kubelet/pods/268883cf-a27e-4b69-bd41-18f0a35c3e6a/volumes" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.486690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerStarted","Data":"6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b"} Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.488596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerStarted","Data":"0618ff92ae5b40adca08a74a83a3ae1b7472aacf6d9f5ce203122d3b72de0111"} Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.490681 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerID="e5b939e411d2d32f4a5a28df3de1f1b782b1984cc3579e1a45fcab992aaff3dd" exitCode=0 Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.490750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerDied","Data":"e5b939e411d2d32f4a5a28df3de1f1b782b1984cc3579e1a45fcab992aaff3dd"} Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.491608 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.491669 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.492013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.500894 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" Jan 30 13:47:26 crc kubenswrapper[4793]: I0130 13:47:26.582094 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" podStartSLOduration=45.582074244 podStartE2EDuration="45.582074244s" podCreationTimestamp="2026-01-30 13:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:26.579607683 +0000 UTC m=+257.280956174" watchObservedRunningTime="2026-01-30 13:47:26.582074244 +0000 UTC m=+257.283422755" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.514797 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" podStartSLOduration=47.51478177 podStartE2EDuration="47.51478177s" podCreationTimestamp="2026-01-30 13:46:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:27.514486663 +0000 UTC m=+258.215835154" watchObservedRunningTime="2026-01-30 13:47:27.51478177 +0000 UTC m=+258.216130261" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.714161 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.757628 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") pod \"fd400d07-c5a8-40c2-9c01-dab9908caf49\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.757689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") pod \"fd400d07-c5a8-40c2-9c01-dab9908caf49\" (UID: \"fd400d07-c5a8-40c2-9c01-dab9908caf49\") " Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.758064 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fd400d07-c5a8-40c2-9c01-dab9908caf49" (UID: "fd400d07-c5a8-40c2-9c01-dab9908caf49"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.763035 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fd400d07-c5a8-40c2-9c01-dab9908caf49" (UID: "fd400d07-c5a8-40c2-9c01-dab9908caf49"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.859376 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd400d07-c5a8-40c2-9c01-dab9908caf49-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:47:27 crc kubenswrapper[4793]: I0130 13:47:27.859411 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd400d07-c5a8-40c2-9c01-dab9908caf49-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:47:28 crc kubenswrapper[4793]: I0130 13:47:28.502269 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fd400d07-c5a8-40c2-9c01-dab9908caf49","Type":"ContainerDied","Data":"f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9"} Jan 30 13:47:28 crc kubenswrapper[4793]: I0130 13:47:28.502599 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0c5de85690b60b1af61dd311dd1196ccd5e50683ae6a8ca24fed10893d3d8c9" Jan 30 13:47:28 crc kubenswrapper[4793]: I0130 13:47:28.502360 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:47:30 crc kubenswrapper[4793]: I0130 13:47:30.437903 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=41.437874958 podStartE2EDuration="41.437874958s" podCreationTimestamp="2026-01-30 13:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:47:28.520482109 +0000 UTC m=+259.221830620" watchObservedRunningTime="2026-01-30 13:47:30.437874958 +0000 UTC m=+261.139223529" Jan 30 13:47:31 crc kubenswrapper[4793]: I0130 13:47:31.529314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerStarted","Data":"bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03"} Jan 30 13:47:32 crc kubenswrapper[4793]: I0130 13:47:32.558246 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t46g" podStartSLOduration=7.54184761 podStartE2EDuration="1m28.558226709s" podCreationTimestamp="2026-01-30 13:46:04 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.844176541 +0000 UTC m=+179.545525032" lastFinishedPulling="2026-01-30 13:47:29.86055563 +0000 UTC m=+260.561904131" observedRunningTime="2026-01-30 13:47:32.553913759 +0000 UTC m=+263.255262270" watchObservedRunningTime="2026-01-30 13:47:32.558226709 +0000 UTC m=+263.259575200" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.072565 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.078068 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463347 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463405 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-sd6hs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463641 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.463690 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sd6hs" podUID="6e9a73cf-3a15-4a72-9d5a-2cdd62318ea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.905874 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:34 crc kubenswrapper[4793]: I0130 13:47:34.906246 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:36 crc kubenswrapper[4793]: I0130 13:47:36.685327 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" probeResult="failure" output=< Jan 30 13:47:36 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:47:36 crc kubenswrapper[4793]: > Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.378741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.379142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.379240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.381227 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.381540 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.381820 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.391595 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.395538 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.410216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.433963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.480333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.484410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.517610 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.527760 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:47:43 crc kubenswrapper[4793]: I0130 13:47:43.536563 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:47:44 crc kubenswrapper[4793]: I0130 13:47:44.481362 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sd6hs" Jan 30 13:47:44 crc kubenswrapper[4793]: I0130 13:47:44.993409 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:45 crc kubenswrapper[4793]: I0130 13:47:45.035195 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:47:45 crc kubenswrapper[4793]: I0130 13:47:45.222189 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t46g"] Jan 30 13:47:46 crc kubenswrapper[4793]: I0130 13:47:46.597553 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t46g" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" containerID="cri-o://bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" gracePeriod=2 Jan 30 13:47:49 crc kubenswrapper[4793]: I0130 13:47:48.613109 4793 generic.go:334] "Generic (PLEG): container finished" podID="551044e9-867a-4307-a28c-ea34bab39473" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" exitCode=0 Jan 30 13:47:49 crc kubenswrapper[4793]: I0130 13:47:48.613390 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03"} Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.906365 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.908127 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.908725 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:47:54 crc kubenswrapper[4793]: E0130 13:47:54.908834 4793 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9t46g" 
podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.028889 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.029487 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031215 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-utilities" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031237 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-utilities" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031257 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031265 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031280 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-content" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031287 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="extract-content" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.031300 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerName="pruner" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031308 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerName="pruner" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031434 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="551044e9-867a-4307-a28c-ea34bab39473" containerName="registry-server" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.031450 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd400d07-c5a8-40c2-9c01-dab9908caf49" containerName="pruner" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036413 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036459 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036655 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036666 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036679 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036684 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036695 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036701 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036709 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036714 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036721 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036727 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036736 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036809 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036827 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.036839 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036845 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036981 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036991 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.036999 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037006 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037015 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037200 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.037329 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.038965 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040031 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040099 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040143 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.040183 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995" gracePeriod=15 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.058675 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.066956 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") pod \"551044e9-867a-4307-a28c-ea34bab39473\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.067085 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") pod \"551044e9-867a-4307-a28c-ea34bab39473\" (UID: 
\"551044e9-867a-4307-a28c-ea34bab39473\") " Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.067131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") pod \"551044e9-867a-4307-a28c-ea34bab39473\" (UID: \"551044e9-867a-4307-a28c-ea34bab39473\") " Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068729 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068756 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.068823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.075401 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities" (OuterVolumeSpecName: "utilities") pod "551044e9-867a-4307-a28c-ea34bab39473" (UID: "551044e9-867a-4307-a28c-ea34bab39473"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.078297 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm" (OuterVolumeSpecName: "kube-api-access-b2blm") pod "551044e9-867a-4307-a28c-ea34bab39473" (UID: "551044e9-867a-4307-a28c-ea34bab39473"). InnerVolumeSpecName "kube-api-access-b2blm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.178769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.178987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179929 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180754 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2blm\" (UniqueName: \"kubernetes.io/projected/551044e9-867a-4307-a28c-ea34bab39473-kube-api-access-b2blm\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180853 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.180978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179335 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.179418 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.181662 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.183470 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.198852 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.304898 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "551044e9-867a-4307-a28c-ea34bab39473" (UID: "551044e9-867a-4307-a28c-ea34bab39473"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.390734 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551044e9-867a-4307-a28c-ea34bab39473-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.392924 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.416591 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.736350 4793 generic.go:334] "Generic (PLEG): container finished" podID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerID="0618ff92ae5b40adca08a74a83a3ae1b7472aacf6d9f5ce203122d3b72de0111" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.736463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerDied","Data":"0618ff92ae5b40adca08a74a83a3ae1b7472aacf6d9f5ce203122d3b72de0111"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.737262 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.737561 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.746234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.753514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.754358 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.754502 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.758117 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.773294 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 
13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.784374 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785120 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785137 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785147 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785154 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995" exitCode=2 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.785211 4793 scope.go:117] "RemoveContainer" containerID="da8bdd9133e32d0db907df290e2b1e138d2787e92453ad8caf86660b7ffa5506" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.789801 4793 generic.go:334] "Generic (PLEG): container finished" podID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerID="a39b5636265cc040beb743a7d92b7de07f6a61cbb255d62d9adbf1ef86fd75b0" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.789851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"a39b5636265cc040beb743a7d92b7de07f6a61cbb255d62d9adbf1ef86fd75b0"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791100 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791352 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791499 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.791658 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: 
connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.795181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bc1253a936a1fb130b3d0bd5a4a4e0faab053a8532b79f469a9186771a1ba586"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.798845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t46g" event={"ID":"551044e9-867a-4307-a28c-ea34bab39473","Type":"ContainerDied","Data":"2755c7eacfd017f81d392f7b77b2261e36a1e0f02e74ee8dd73cb61fa736268b"} Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.798930 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t46g" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.799716 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.799863 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.800003 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.800160 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.800292 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.804314 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.821210 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.821502 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.821699 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: I0130 13:48:04.824693 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.849855 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:04 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513" Netns:"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:04 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:04 crc kubenswrapper[4793]: > Jan 30 13:48:04 crc 
kubenswrapper[4793]: E0130 13:48:04.849951 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:04 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513" Netns:"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:04 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:04 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.849967 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:04 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513" Netns:"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:04 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:04 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:04 crc kubenswrapper[4793]: E0130 13:48:04.850024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513\\\" Netns:\\\"/var/run/netns/df52414f-eebd-4743-9919-33beb0544a43\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=916e08f5bfe2a441e2b0dff662354807edafaf57a4ea3256ae499e3ec75de513;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" 
podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.021700 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506
ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.022109 4793 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.022469 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.022700 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.023038 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.023090 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.045433 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21" Netns:"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > Jan 30 13:48:05 crc kubenswrapper[4793]: 
E0130 13:48:05.045497 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21" Netns:"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.045523 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21" Netns:"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: 
[openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.045581 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21\\\" Netns:\\\"/var/run/netns/10ba0c7f-05ef-4afe-a856-e5d6da0edfca\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=7982fea5dcb1528b7cc11e995c908f59f6cca8a6f1f3f5c83cc44ea58f595d21;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049830 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38" Netns:"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049900 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38" Netns:"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049925 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:05 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38" Netns:"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:05 crc kubenswrapper[4793]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:05 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:05 crc kubenswrapper[4793]: E0130 13:48:05.049998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38\\\" Netns:\\\"/var/run/netns/ecdd22f0-0d26-4bd9-95c3-691dc891d81b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=8fb6a231991dd5e985d6b1f7c116f02a9e2867813fb8d9d79c0af760d4076d38;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.806266 4793 generic.go:334] "Generic (PLEG): container finished" podID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.806361 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.807334 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.807530 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.807830 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808198 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808368 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808567 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.808690 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerStarted","Data":"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810030 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810268 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810533 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.810836 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811330 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811689 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811704 4793 generic.go:334] "Generic (PLEG): container finished" podID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerID="3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811770 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.811913 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812230 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812380 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 
13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812611 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.812868 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813087 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813300 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813515 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.813680 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.816913 4793 generic.go:334] "Generic (PLEG): container finished" podID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerID="0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.816967 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.818139 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.818605 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.818882 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.819074 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.819358 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.819799 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820042 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820268 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820616 4793 generic.go:334] "Generic (PLEG): container finished" podID="02ec4db2-0283-437a-999f-d50a10ab046c" containerID="b9519a38e06d14f0b9522f2ca7c944b5d849d5137311c5fba903cacfaefb9b67" exitCode=0 Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.820673 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"b9519a38e06d14f0b9522f2ca7c944b5d849d5137311c5fba903cacfaefb9b67"} Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.821482 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" 
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.821779 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.821985 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.823248 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.823510 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.823878 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.824268 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.824554 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.824983 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.825475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerStarted","Data":"17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503"}
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.826111 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.827763 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.828266 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.829804 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830137 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830337 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830426 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"}
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830556 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.830862 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.831181 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.831534 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:05 crc kubenswrapper[4793]: I0130 13:48:05.987641 4793 scope.go:117] "RemoveContainer" containerID="bc9e3bacbc8abb31cf8aa4c4752afdeeff1dcf1ca92c1d16ad9d9dc43aa20b03"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.003542 4793 scope.go:117] "RemoveContainer" containerID="8badd89e5ba818e3190858ac0610210fba8c0135f1eed3a6d67ab9234d8a776d"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.062703 4793 scope.go:117] "RemoveContainer" containerID="ad13ab2dd584826367febbb63bb47fc2488d332ee67905dd6b329b48680fd011"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.078872 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.079337 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.079493 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.079751 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080152 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080329 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080464 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080601 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080735 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080867 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.080998 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119359 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") pod \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") "
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119429 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") pod \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") "
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119452 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") pod \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\" (UID: \"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5\") "
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" (UID: "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.119857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock" (OuterVolumeSpecName: "var-lock") pod "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" (UID: "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.125331 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" (UID: "fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.223981 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.224009 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.224018 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.841433 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.845762 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5","Type":"ContainerDied","Data":"e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e"}
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.845810 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e047f8a8f147431c44c82ab17ef01b1add23ce519a6f0480d69181bc2cb61e"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.845785 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.846602 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.846767 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.846938 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.847495 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.847828 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.847996 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848222 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848434 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848606 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.848775 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.853259 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.853633 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.854121 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.854495 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.854829 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855130 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855393 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855625 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.855889 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:06 crc kubenswrapper[4793]: I0130 13:48:06.856198 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.852188 4793 generic.go:334] "Generic (PLEG): container finished" podID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" exitCode=0
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.852273 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d"}
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.853824 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.854617 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.855021 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.855739 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.856992 4793 generic.go:334] "Generic (PLEG): container finished" podID="89a43c58-d327-429a-96cd-9f9f5393368a" containerID="17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503" exitCode=0
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.857062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503"}
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.857155 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.858470 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860207 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860509 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860740 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.860993 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.861434 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862108 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862140 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862328 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862584 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862768 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6" exitCode=0
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.862856 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863106 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863366 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863574 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863757 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:07 crc kubenswrapper[4793]: I0130 13:48:07.863920 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.116922 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.117471 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.117934 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.118290 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.118596 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:08 crc kubenswrapper[4793]: I0130 13:48:08.118630 4793 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.118912 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="200ms"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.320432 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="400ms"
Jan 30 13:48:08 crc kubenswrapper[4793]: E0130 13:48:08.721578 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="800ms"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.206697 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.207454 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.208290 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.208579 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.208916 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.209227 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.209573 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.209910 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.210322 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.210625 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.210951 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.211281 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.211520 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262089 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262207 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262249 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262396 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262763 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262783 4793 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.262792 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:48:09 crc kubenswrapper[4793]: E0130 13:48:09.522670 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="1.6s"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.875653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.876502 4793 scope.go:117] "RemoveContainer" containerID="233700d9586291098a923ce598cfa3e80727199db39035abbb2cfe2c6019bc03"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.876649 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.892459 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.893442 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.893850 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.894124 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.894476 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.894844 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.895199 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.895423 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.895795 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.896088 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.896383 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.896939 4793 scope.go:117] "RemoveContainer" containerID="a26096c3c6916df8dee3712e1d374e43eb68892f5e66ad05f37a4b80dd3abc01"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.909686 4793 scope.go:117] "RemoveContainer" containerID="ca5ef90b25c5dc0192f988e9e973c17ea94e82dcd387a0d04a02d5defc435690"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.920752 4793 scope.go:117] "RemoveContainer" containerID="a9f2244dbcf81bb9bfbd655f0fb5cecfe67df761d9b0644969b96eb29f6c3995"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.934411 4793 scope.go:117] "RemoveContainer" containerID="bc3b86e0c691c2e7dbc00dee808ed673b9a366ea53b023336c07493fa26f93e6"
Jan 30 13:48:09 crc kubenswrapper[4793]: I0130 13:48:09.951109 4793 scope.go:117] "RemoveContainer" containerID="e83f62beffe2c4cc9cdbcd9878f768f74ec5b971c30c706a0ea668049db544ec"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.079407 4793 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 30 13:48:10 crc kubenswrapper[4793]: E0130 13:48:10.336897 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.400958 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.401308 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.415725 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.415965 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.416216 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.416645 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.416943 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.417351 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.417659 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.417844 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.418093 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused"
Jan 30 13:48:10 crc kubenswrapper[4793]: I0130 13:48:10.421605 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 13:48:11 crc kubenswrapper[4793]: E0130 13:48:11.124891 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="3.2s"
Jan 30 13:48:14 crc kubenswrapper[4793]: E0130 13:48:14.326418 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="6.4s"
Jan 30 13:48:14 crc kubenswrapper[4793]: I0130 13:48:14.904585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerStarted","Data":"539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55"}
Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.247896 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5
f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.248400 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 
13:48:15.248682 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.248918 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.249213 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: E0130 13:48:15.249243 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.911255 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.912446 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.912723 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.912942 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913162 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913352 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913544 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913736 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.913928 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:15 crc kubenswrapper[4793]: I0130 13:48:15.914140 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.538332 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.538380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.594511 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.595122 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.595772 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.595981 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596198 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": 
dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596387 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596567 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596738 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.596907 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.597092 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:16 crc kubenswrapper[4793]: I0130 13:48:16.597317 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928177 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928474 4793 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410" exitCode=1 Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410"} Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.928990 4793 scope.go:117] "RemoveContainer" containerID="f705b9a4a3f2b6c774096ea56b14eb8d562b01a5e2666ebb49299e619875a410" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.929465 4793 status_manager.go:851] "Failed to get status for 
pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.930586 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.930938 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931199 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931406 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931606 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.931805 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.932190 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.932482 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.932945 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:18 crc kubenswrapper[4793]: I0130 13:48:18.933237 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.178236 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397780 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397877 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.397770 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.398299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.398394 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.398539 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.399467 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.400424 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.400638 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.400998 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.401443 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.401969 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.402412 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.406779 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.407357 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.407769 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.409142 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.419942 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.419974 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:19 crc kubenswrapper[4793]: E0130 13:48:19.420446 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:19 crc kubenswrapper[4793]: I0130 13:48:19.421189 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:20 crc kubenswrapper[4793]: E0130 13:48:20.338781 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.403633 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.404468 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.404889 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.405161 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.405551 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.405983 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc 
kubenswrapper[4793]: I0130 13:48:20.406451 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.406822 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.407106 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.407477 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.407879 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.408143 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:20 crc kubenswrapper[4793]: E0130 13:48:20.727380 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Jan 30 13:48:20 crc kubenswrapper[4793]: I0130 13:48:20.871970 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.396856 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5
f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.397481 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 
13:48:25.397794 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.398034 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.398269 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:25 crc kubenswrapper[4793]: E0130 13:48:25.398295 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.587207 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.588465 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.588936 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.589516 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.589772 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.590014 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.590365 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.590736 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591009 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591335 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591675 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.591932 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.592234 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:26 crc kubenswrapper[4793]: I0130 13:48:26.889582 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:27 crc kubenswrapper[4793]: E0130 13:48:27.729316 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Jan 30 13:48:30 crc kubenswrapper[4793]: E0130 13:48:30.340770 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.402338 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.402884 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.403973 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.404784 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.405815 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.406496 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.406828 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.407255 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.407552 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.407812 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.408106 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:30 crc kubenswrapper[4793]: I0130 13:48:30.408369 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:34 crc kubenswrapper[4793]: E0130 13:48:34.730562 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.784196 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5
f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.785926 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 
13:48:35.786414 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.786663 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.786971 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:35 crc kubenswrapper[4793]: E0130 13:48:35.787090 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:40 crc kubenswrapper[4793]: E0130 13:48:40.342529 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-kvlgd.188f86567077f07d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-kvlgd,UID:08b55ba0-087d-42ec-a0c5-538f0a3c0987,APIVersion:v1,ResourceVersion:28524,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container extract-content,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,LastTimestamp:2026-01-30 13:48:04.392112253 +0000 UTC m=+295.093460744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.400288 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.400833 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.401470 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.401882 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.402525 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.403142 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.405464 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.406114 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.407241 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.407526 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.407783 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:40 crc kubenswrapper[4793]: I0130 13:48:40.408082 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:41 crc kubenswrapper[4793]: E0130 13:48:41.732529 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.880008 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:48:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807dd
d76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection 
refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.880856 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881117 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881382 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881671 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:45 crc kubenswrapper[4793]: E0130 13:48:45.881689 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589099 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11" Netns:"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589341 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11" Netns:"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589363 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11" 
Netns:"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.589429 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11\\\" Netns:\\\"/var/run/netns/2f1be3ea-cce3-4fc0-9c88-27527a0cb39d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=fcfa73ae73d66a42981ebf4f50dc88d88e7e3fcae045805748ab6f831446ec11;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for 
pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.599415 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044" Netns:"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.599481 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod 
openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044" Netns:"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.600183 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:46 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044" Netns:"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:46 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:46 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:48:46 crc kubenswrapper[4793]: E0130 13:48:46.600311 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044\\\" Netns:\\\"/var/run/netns/0ab308dc-b6eb-4831-a897-abd8bc6df026\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=5bf932f20e720e3f7f149a3459491c99bb7cb2376b0f727132971133e7961044;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.096186 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f4b66b3a3b80510bb6d511455d0313195b10051500368abcf54792dd82c05a59"} Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.098489 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.098782 4793 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3" exitCode=1 Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.098808 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3"} Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.099368 4793 scope.go:117] "RemoveContainer" containerID="16b6f692a83467d0f05b5827e35e332340bb72c6de3c8a4c407d59af5d1075c3" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.099641 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.099910 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.100327 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.100508 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.100767 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101216 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101462 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101639 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101786 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.101928 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.102080 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.102221 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: I0130 13:48:47.102354 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145621 4793 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 13:48:47 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5" Netns:"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:47 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:47 crc kubenswrapper[4793]: > Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145695 4793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 13:48:47 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5" Netns:"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:47 crc kubenswrapper[4793]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:47 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145717 4793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 13:48:47 crc kubenswrapper[4793]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5" Netns:"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.2:6443: connect: connection refused Jan 30 13:48:47 crc kubenswrapper[4793]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 13:48:47 crc kubenswrapper[4793]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:48:47 crc kubenswrapper[4793]: E0130 13:48:47.145775 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5\\\" Netns:\\\"/var/run/netns/5ea5e4f3-80af-41aa-8f63-5bc42bc08ffc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=28abf1937dcb28a7e9ef4c6740880d69ea971ea964bea0e3ab86602ac704bab5;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.2:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:48:48 crc kubenswrapper[4793]: E0130 13:48:48.733124 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112409 4793 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6506fc668bb4ba3d37719afb4aa45245679057c496c260396a0681c5eb1ab5fd" exitCode=0 Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112546 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6506fc668bb4ba3d37719afb4aa45245679057c496c260396a0681c5eb1ab5fd"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112905 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.112922 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:49 crc kubenswrapper[4793]: E0130 13:48:49.113343 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.113369 4793 status_manager.go:851] 
"Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.113562 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.113758 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114011 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114231 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114405 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114578 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114755 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.114928 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc 
kubenswrapper[4793]: I0130 13:48:49.115134 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.115314 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.115492 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.115668 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.120919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerStarted","Data":"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.121801 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.122029 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.122302 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.122679 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123102 4793 status_manager.go:851] "Failed to get 
status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerStarted","Data":"bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123555 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.123809 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.124389 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.124767 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.125155 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.125573 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.125847 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126083 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126371 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126550 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126705 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.126843 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127007 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127180 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127319 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127497 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127679 4793 status_manager.go:851] "Failed to get status for pod" 
podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127830 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.127971 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.128205 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.128435 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.128765 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerStarted","Data":"04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130111 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130372 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130671 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.130988 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" 
pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.131262 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.131563 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.131804 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132066 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132351 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132561 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.132847 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133195 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerStarted","Data":"393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133400 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.133888 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.134127 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.134338 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.134639 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135093 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135315 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135554 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.135845 4793 status_manager.go:851] "Failed to get status for 
pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136160 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136246 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d898ce2eb670ce9a98146f45c2c9134c0399865527e45c0963a3df7613fb855"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136261 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.136555 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.138737 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.139164 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.139419 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.139865 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.140202 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.140522 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.140921 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.141037 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.141934 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142057 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3f8251d8cc4d16af4a648c0de85dc3b7067c45868ed41fc506bb343a45b0bfda"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142176 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142469 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142705 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.142973 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 
30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.143430 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.143775 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144013 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144317 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144616 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144806 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.144992 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145264 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145446 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial 
tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145623 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145767 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.145906 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146069 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146414 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146627 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146784 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.146922 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.148433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerStarted","Data":"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c"} Jan 30 13:48:49 crc kubenswrapper[4793]: 
I0130 13:48:49.151216 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.151594 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.151933 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.152856 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.155269 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.155689 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.155925 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.156257 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.156418 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection 
refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.156573 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.158577 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerStarted","Data":"84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01"} Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.175516 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.177446 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.177499 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.177525 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.193554 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.213898 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.233760 4793 status_manager.go:851] "Failed to get status for pod" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" pod="openshift-marketplace/redhat-operators-vn6kf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vn6kf\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.256700 4793 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.274278 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.294503 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.313606 4793 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.334008 4793 status_manager.go:851] "Failed to get status for pod" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" pod="openshift-marketplace/certified-operators-j4vzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4vzj\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.354249 4793 status_manager.go:851] "Failed to get status for pod" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.374191 4793 status_manager.go:851] "Failed to get status for pod" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" pod="openshift-marketplace/redhat-marketplace-kvlgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kvlgd\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.393970 4793 status_manager.go:851] "Failed to get status for pod" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" pod="openshift-marketplace/redhat-operators-fxl8f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fxl8f\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.413868 4793 status_manager.go:851] "Failed to get status for pod" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" pod="openshift-marketplace/certified-operators-g9t8x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g9t8x\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.433766 4793 status_manager.go:851] "Failed to get status for pod" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" 
pod="openshift-marketplace/redhat-marketplace-mn7sx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn7sx\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.454341 4793 status_manager.go:851] "Failed to get status for pod" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" pod="openshift-marketplace/community-operators-6qnl2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6qnl2\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:49 crc kubenswrapper[4793]: I0130 13:48:49.473928 4793 status_manager.go:851] "Failed to get status for pod" podUID="551044e9-867a-4307-a28c-ea34bab39473" pod="openshift-marketplace/community-operators-9t46g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9t46g\": dial tcp 38.102.83.2:6443: connect: connection refused" Jan 30 13:48:50 crc kubenswrapper[4793]: I0130 13:48:50.166133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0530a3b6a8c1fa539f47b2b61219189174a05eda145a7977d3139dafc2f5fabc"} Jan 30 13:48:51 crc kubenswrapper[4793]: I0130 13:48:51.172895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f2ce507d8896c9a4147fd15d2195cc8386fc0c107e2d3da6dc6b3afd7cf3a5aa"} Jan 30 13:48:53 crc kubenswrapper[4793]: I0130 13:48:53.186108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d58f2970981102c5de1327291e81f27036a6711b7e3ce61eeef1bc8ce66569b"} Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.192721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f67021520d79c4b475c79918229753abd870a84a3bc800d01f5ee27b3e04943d"} Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.382219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.382854 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.423583 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.555603 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.555683 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.595361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.694014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.694079 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:54 crc kubenswrapper[4793]: I0130 13:48:54.738981 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.201429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a393268ebfc4150bb652a680cc053a55806d9cef1ed7d3ab4cdeee748f359c1f"} Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.242077 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.244101 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:48:55 crc kubenswrapper[4793]: I0130 13:48:55.247710 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.218011 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.218315 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.218006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.224377 4793 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.336514 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.889793 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.915928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.916160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:56 crc kubenswrapper[4793]: I0130 13:48:56.968515 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.224452 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.224507 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.228702 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.264489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.397603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.397970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.457176 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.457214 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.515414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:57 crc kubenswrapper[4793]: W0130 13:48:57.803293 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47 WatchSource:0}: Error finding container 70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47: Status 404 returned error can't find the container with id 70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47 Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.849712 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.850159 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:57 crc kubenswrapper[4793]: I0130 13:48:57.898238 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 13:48:58.231450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a92c44723e724fe3d77b0711ba4590782cf5ceec156d6f06ef0f99d1495d7a42"} Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 13:48:58.231523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70f1d253e6607cd5633d90e0b93c6f7667e68969b0899a190ae06ce3a39ece47"} Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 13:48:58.270624 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:48:58 crc kubenswrapper[4793]: I0130 
13:48:58.270688 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:48:59 crc kubenswrapper[4793]: I0130 13:48:59.177642 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 13:48:59 crc kubenswrapper[4793]: I0130 13:48:59.177708 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 13:49:00 crc kubenswrapper[4793]: I0130 13:49:00.398319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:49:00 crc kubenswrapper[4793]: I0130 13:49:00.413524 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:49:00 crc kubenswrapper[4793]: W0130 13:49:00.822424 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6 WatchSource:0}: Error finding container 38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6: Status 404 returned error can't find the container with id 38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6 Jan 30 13:49:01 crc kubenswrapper[4793]: I0130 13:49:01.248357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708"} Jan 30 13:49:01 crc kubenswrapper[4793]: I0130 13:49:01.248656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"38918ddcee4170314db0cbff959bdef64b07c64dbf2b932b651ab2d65bf442e6"} Jan 30 13:49:02 crc kubenswrapper[4793]: I0130 13:49:02.397336 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:02 crc kubenswrapper[4793]: I0130 13:49:02.397889 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:02 crc kubenswrapper[4793]: W0130 13:49:02.657749 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2 WatchSource:0}: Error finding container d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2: Status 404 returned error can't find the container with id d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2 Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.270709 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.271064 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708" exitCode=255 Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.271132 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708"} Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.271790 4793 scope.go:117] "RemoveContainer" containerID="8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708" Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.273948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"85a0bd544390d7ba5f391d36b711e3b22bf82d73434a81c8cd5186feadb231d6"} Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.273999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d9696d28ff00eb6e9ce606e0dca01b21ee6c773b6487c732432a922adfd8b9c2"} Jan 30 13:49:03 crc kubenswrapper[4793]: I0130 13:49:03.274451 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.283606 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.284908 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.284973 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" exitCode=255 Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.285156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e"} Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.285230 4793 scope.go:117] "RemoveContainer" containerID="8eba692c15f68d62b578428268f61df3278798e68263d7e8a86a6d5171ccf708" Jan 30 13:49:04 crc kubenswrapper[4793]: I0130 13:49:04.285717 4793 scope.go:117] "RemoveContainer" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" Jan 30 13:49:04 crc kubenswrapper[4793]: E0130 13:49:04.286181 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.295761 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746467 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746471 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746520 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:49:05 crc kubenswrapper[4793]: I0130 13:49:05.746594 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:49:06 crc kubenswrapper[4793]: I0130 13:49:06.303776 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b" exitCode=0 Jan 30 13:49:06 crc kubenswrapper[4793]: I0130 13:49:06.303858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b"} Jan 30 13:49:06 crc kubenswrapper[4793]: I0130 13:49:06.304512 4793 scope.go:117] "RemoveContainer" containerID="e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b" Jan 30 13:49:07 crc 
kubenswrapper[4793]: I0130 13:49:07.309944 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/1.log" Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310382 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" exitCode=1 Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae"} Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310442 4793 scope.go:117] "RemoveContainer" containerID="e83f7454337f430495faf606622a60c225aa40f81a53c0c6d2b0f496da168c9b" Jan 30 13:49:07 crc kubenswrapper[4793]: I0130 13:49:07.310961 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:07 crc kubenswrapper[4793]: E0130 13:49:07.311253 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:08 crc kubenswrapper[4793]: I0130 13:49:08.316770 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/1.log" Jan 30 13:49:09 crc kubenswrapper[4793]: I0130 13:49:09.181104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:49:09 crc kubenswrapper[4793]: I0130 13:49:09.185772 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:49:15 crc kubenswrapper[4793]: I0130 13:49:15.745450 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:15 crc kubenswrapper[4793]: I0130 13:49:15.746303 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:15 crc kubenswrapper[4793]: I0130 13:49:15.746405 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:15 crc kubenswrapper[4793]: E0130 13:49:15.746651 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:16 crc kubenswrapper[4793]: I0130 13:49:16.357257 4793 scope.go:117] "RemoveContainer" 
containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:16 crc kubenswrapper[4793]: E0130 13:49:16.357440 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:18 crc kubenswrapper[4793]: I0130 13:49:18.398551 4793 scope.go:117] "RemoveContainer" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" Jan 30 13:49:19 crc kubenswrapper[4793]: I0130 13:49:19.374101 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:19 crc kubenswrapper[4793]: I0130 13:49:19.374167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e"} Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.382229 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.382997 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383043 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" exitCode=255 Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e"} Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383213 4793 scope.go:117] "RemoveContainer" containerID="18c87fd30c2aa4f43e2df67f6ee4c2f95073809e41963cbaef782a613a8fbc2e" Jan 30 13:49:20 crc kubenswrapper[4793]: I0130 13:49:20.383705 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:20 crc kubenswrapper[4793]: E0130 13:49:20.383900 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:21 crc kubenswrapper[4793]: I0130 13:49:21.388894 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" 
Jan 30 13:49:24 crc kubenswrapper[4793]: I0130 13:49:24.243312 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 13:49:27 crc kubenswrapper[4793]: I0130 13:49:27.202329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:49:28 crc kubenswrapper[4793]: I0130 13:49:28.398258 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:28 crc kubenswrapper[4793]: I0130 13:49:28.499483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.104032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.438563 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/1.log" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439217 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" exitCode=1 Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439246 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c"} Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439278 4793 scope.go:117] "RemoveContainer" containerID="5c6a9897c4b95a29afcee12bdcee6053aceb808a8e015aa04e687cc0d82426ae" Jan 30 13:49:29 crc kubenswrapper[4793]: I0130 13:49:29.439776 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:29 crc kubenswrapper[4793]: E0130 13:49:29.440008 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:30 crc kubenswrapper[4793]: I0130 13:49:30.455172 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.277193 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.398446 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:33 crc kubenswrapper[4793]: E0130 
13:49:33.398912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.446501 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.486934 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.538071 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:49:33 crc kubenswrapper[4793]: I0130 13:49:33.812516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:49:34 crc kubenswrapper[4793]: I0130 13:49:34.037013 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:49:34 crc kubenswrapper[4793]: I0130 13:49:34.496802 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:49:34 crc kubenswrapper[4793]: I0130 13:49:34.610634 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:49:35 crc kubenswrapper[4793]: I0130 13:49:35.745429 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:35 crc kubenswrapper[4793]: I0130 13:49:35.746223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:49:35 crc kubenswrapper[4793]: I0130 13:49:35.746732 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:35 crc kubenswrapper[4793]: E0130 13:49:35.747121 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:36 crc kubenswrapper[4793]: I0130 13:49:36.487596 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:36 crc kubenswrapper[4793]: E0130 13:49:36.488024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:38 crc kubenswrapper[4793]: I0130 13:49:38.553215 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:49:40 crc kubenswrapper[4793]: I0130 13:49:40.199188 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:49:41 crc kubenswrapper[4793]: I0130 13:49:41.700628 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:49:41 crc kubenswrapper[4793]: I0130 13:49:41.876982 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:49:41 crc kubenswrapper[4793]: I0130 13:49:41.994627 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:49:42 crc kubenswrapper[4793]: I0130 13:49:42.413356 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:49:42 crc kubenswrapper[4793]: I0130 13:49:42.413416 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:49:43 crc kubenswrapper[4793]: I0130 13:49:43.078989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.201439 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.379585 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.557904 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:49:44 crc kubenswrapper[4793]: I0130 13:49:44.566750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:49:45 crc kubenswrapper[4793]: I0130 13:49:45.246900 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:49:45 crc kubenswrapper[4793]: I0130 13:49:45.397898 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.029276 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.389504 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.517837 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 
13:49:46.538196 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.538896 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.539077 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" exitCode=255 Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.539152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68"} Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.539341 4793 scope.go:117] "RemoveContainer" containerID="459e7ec9681ba9623ac0f17da5a8dbb8dcdeba668e407dc4e833dc7f04764b7e" Jan 30 13:49:46 crc kubenswrapper[4793]: I0130 13:49:46.540236 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:49:46 crc kubenswrapper[4793]: E0130 13:49:46.540665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:49:47 crc kubenswrapper[4793]: I0130 13:49:47.081647 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:49:47 crc kubenswrapper[4793]: I0130 13:49:47.545716 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Jan 30 13:49:48 crc kubenswrapper[4793]: I0130 13:49:48.179716 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:49:48 crc kubenswrapper[4793]: I0130 13:49:48.397792 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:49:48 crc kubenswrapper[4793]: E0130 13:49:48.397984 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.533589 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.621670 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:49:49 crc 
kubenswrapper[4793]: I0130 13:49:49.650174 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.923357 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:49:49 crc kubenswrapper[4793]: I0130 13:49:49.982545 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:49:50 crc kubenswrapper[4793]: I0130 13:49:50.226966 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:49:50 crc kubenswrapper[4793]: I0130 13:49:50.768684 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.126663 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.305865 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.369817 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:49:51 crc kubenswrapper[4793]: I0130 13:49:51.528950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.045464 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.159728 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.168481 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.287833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.378532 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.470977 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 13:49:52 crc kubenswrapper[4793]: I0130 13:49:52.649876 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.002988 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.164957 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.262913 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.807706 4793 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.819560 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.844551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.863767 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:49:53 crc kubenswrapper[4793]: I0130 13:49:53.962703 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.096391 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.214178 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.215626 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:49:54 crc kubenswrapper[4793]: I0130 13:49:54.381196 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.055966 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.135486 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.307004 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.377178 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.394394 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.829290 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:49:55 crc kubenswrapper[4793]: I0130 13:49:55.881041 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.014957 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.052077 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.167739 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:49:56 crc 
kubenswrapper[4793]: I0130 13:49:56.210605 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.254533 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.589497 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.691578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:49:56 crc kubenswrapper[4793]: I0130 13:49:56.905753 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.053042 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.057568 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.124668 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.328730 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.452202 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.789617 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.963297 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:49:57 crc kubenswrapper[4793]: I0130 13:49:57.985961 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.478836 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.489667 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.605595 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.605642 4793 generic.go:334] "Generic (PLEG): container finished" podID="10c05bcf-ffb2-4175-b323-067804ea3391" containerID="212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695" exitCode=1 Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.605683 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" 
event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerDied","Data":"212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695"} Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.606201 4793 scope.go:117] "RemoveContainer" containerID="212528f818185ed34c08690d1751b643e849af81e53c1991d8ea6a0b53521695" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.684513 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.833877 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:49:58 crc kubenswrapper[4793]: I0130 13:49:58.909640 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.149521 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.250683 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.275034 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.314089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.359553 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.377810 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.537108 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.616522 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.616578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vqxml" event={"ID":"10c05bcf-ffb2-4175-b323-067804ea3391","Type":"ContainerStarted","Data":"b05360624036ea9bd7a9da009b7bb2eef5dfd51728acb5243e4acc994916b054"} Jan 30 13:49:59 crc kubenswrapper[4793]: I0130 13:49:59.749124 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.054766 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.407074 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.408499 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.696022 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.818541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:50:00 crc kubenswrapper[4793]: I0130 13:50:00.980470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.173288 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.215156 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.369933 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.398272 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.398566 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:50:01 crc kubenswrapper[4793]: E0130 13:50:01.399076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.428520 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.627608 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.627945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"} Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.628393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.630012 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.630078 4793 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.757708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.806488 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.924782 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:50:01 crc kubenswrapper[4793]: I0130 13:50:01.991667 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.003243 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.030264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.055474 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.166021 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.370612 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.399218 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.399247 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.403790 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.405716 4793 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://a393268ebfc4150bb652a680cc053a55806d9cef1ed7d3ab4cdeee748f359c1f" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.405841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.515420 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635095 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log" Jan 30 13:50:02 crc 
kubenswrapper[4793]: I0130 13:50:02.635631 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/2.log" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635687 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" exitCode=1 Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635827 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"} Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.635869 4793 scope.go:117] "RemoveContainer" containerID="63006967c118b34959cd3fa5d8b60266a4edaff3054eba565ec69e12ca9a1c1c" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.636263 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:02 crc kubenswrapper[4793]: E0130 13:50:02.636451 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.636626 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.636648 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.655114 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1f103c53-b7d9-4380-8d74-173d7a2fafbf" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.695792 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.728519 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.745215 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.780418 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:50:02 crc kubenswrapper[4793]: I0130 13:50:02.830076 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.252369 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.386575 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.394251 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.546654 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.642568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.643290 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:03 crc kubenswrapper[4793]: E0130 13:50:03.643543 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:50:03 crc kubenswrapper[4793]: I0130 13:50:03.980970 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.061741 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.271673 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.473225 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:50:04 crc kubenswrapper[4793]: I0130 13:50:04.582635 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.124011 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.147316 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.362614 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.654889 4793 generic.go:334] "Generic (PLEG): container finished" podID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerID="6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b" exitCode=0 Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.654933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerDied","Data":"6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b"} Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.655397 4793 scope.go:117] 
"RemoveContainer" containerID="6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.658699 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.746004 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:50:05 crc kubenswrapper[4793]: I0130 13:50:05.746861 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:05 crc kubenswrapper[4793]: E0130 13:50:05.747110 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.110547 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.268353 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.403613 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.661776 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerStarted","Data":"c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290"} Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.662102 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.665290 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.689173 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.727346 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.896032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:50:06 crc kubenswrapper[4793]: I0130 13:50:06.918312 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.381568 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.408399 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.695827 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.781691 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.790953 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.868687 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.872325 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:50:07 crc kubenswrapper[4793]: I0130 13:50:07.924949 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.067503 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.101856 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.107640 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:50:08 crc kubenswrapper[4793]: I0130 13:50:08.646659 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.241736 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.294144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.443147 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.536712 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.903730 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:50:09 crc kubenswrapper[4793]: I0130 13:50:09.991376 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.015781 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.129946 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.244620 4793 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.257672 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.296153 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.341278 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.406958 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.447030 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.683094 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-h5zfs_7c31ba39-5ef3-458b-89c1-eb43adfa3d7f/machine-approver-controller/0.log" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.683911 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c31ba39-5ef3-458b-89c1-eb43adfa3d7f" containerID="0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10" exitCode=255 Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.683952 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerDied","Data":"0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10"} Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.684954 4793 scope.go:117] "RemoveContainer" containerID="0da33b576395a991ab5923fecbb1f6438080aff6f085708f99e9123cfd200b10" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.814128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.868713 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:50:10 crc kubenswrapper[4793]: I0130 13:50:10.972173 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.185486 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.326864 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.349731 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.431312 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.666478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" 
Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.691768 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-h5zfs_7c31ba39-5ef3-458b-89c1-eb43adfa3d7f/machine-approver-controller/0.log" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.692150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5zfs" event={"ID":"7c31ba39-5ef3-458b-89c1-eb43adfa3d7f","Type":"ContainerStarted","Data":"0b700c53562ddd958f0820e4e1e832563a04eae702566772395e92ffa66383fc"} Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.699352 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.708555 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.717829 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.872754 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:50:11 crc kubenswrapper[4793]: I0130 13:50:11.934846 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.049507 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.097882 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.193547 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.413531 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.413600 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.598782 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:50:12 crc kubenswrapper[4793]: I0130 13:50:12.844293 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.199757 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.541812 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:50:13 crc 
kubenswrapper[4793]: I0130 13:50:13.790043 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.855535 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:50:13 crc kubenswrapper[4793]: I0130 13:50:13.895707 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.340207 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.398429 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:50:14 crc kubenswrapper[4793]: E0130 13:50:14.398649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.450648 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.455125 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.817103 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.877750 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.924548 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.931956 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 13:50:14 crc kubenswrapper[4793]: I0130 13:50:14.965981 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.089030 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.181740 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.189411 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.203846 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.261327 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.284007 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.376916 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.426448 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.517478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.617814 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.637624 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.763794 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.873169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:50:15 crc kubenswrapper[4793]: I0130 13:50:15.961390 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.041069 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.253107 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.523637 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:50:16 crc kubenswrapper[4793]: I0130 13:50:16.673745 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.052946 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.938757 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.939505 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.943211 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 13:50:17 crc kubenswrapper[4793]: E0130 13:50:17.943514 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.955282 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.955963 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.958385 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:50:17 crc kubenswrapper[4793]: I0130 13:50:17.964951 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:50:18 crc kubenswrapper[4793]: I0130 13:50:18.127441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:50:18 crc kubenswrapper[4793]: I0130 13:50:18.234405 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:50:18 crc kubenswrapper[4793]: I0130 13:50:18.754619 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.114370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.350658 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.424703 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.454332 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:50:19 crc kubenswrapper[4793]: I0130 13:50:19.553974 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.006446 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.246231 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.699486 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:50:20 crc kubenswrapper[4793]: I0130 13:50:20.984361 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.068824 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 
13:50:21.213468 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.461453 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:50:21 crc kubenswrapper[4793]: I0130 13:50:21.973157 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.041875 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.218208 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.329210 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:50:22 crc kubenswrapper[4793]: I0130 13:50:22.668189 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.393727 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.740833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.807505 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.822034 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.849816 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:50:24 crc kubenswrapper[4793]: I0130 13:50:24.871275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:50:25 crc kubenswrapper[4793]: I0130 13:50:25.082350 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:50:25 crc kubenswrapper[4793]: I0130 13:50:25.216737 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 13:50:25 crc kubenswrapper[4793]: I0130 13:50:25.400465 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68" Jan 30 13:50:25 crc kubenswrapper[4793]: E0130 13:50:25.400879 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:50:27 crc kubenswrapper[4793]: I0130 13:50:27.093592 4793 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.360370 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.362019 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qnl2" podStartSLOduration=127.975115058 podStartE2EDuration="4m25.362003817s" podCreationTimestamp="2026-01-30 13:46:04 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.896746941 +0000 UTC m=+179.598095432" lastFinishedPulling="2026-01-30 13:48:26.28363566 +0000 UTC m=+316.984984191" observedRunningTime="2026-01-30 13:48:56.487145353 +0000 UTC m=+347.188493854" watchObservedRunningTime="2026-01-30 13:50:29.362003817 +0000 UTC m=+440.063352308" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.362565 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vn6kf" podStartSLOduration=106.21057781 podStartE2EDuration="4m22.362558482s" podCreationTimestamp="2026-01-30 13:46:07 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.912918036 +0000 UTC m=+179.614266537" lastFinishedPulling="2026-01-30 13:48:45.064898718 +0000 UTC m=+335.766247209" observedRunningTime="2026-01-30 13:48:56.308941991 +0000 UTC m=+347.010290492" watchObservedRunningTime="2026-01-30 13:50:29.362558482 +0000 UTC m=+440.063906973" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.362951 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=145.362944431 podStartE2EDuration="2m25.362944431s" podCreationTimestamp="2026-01-30 13:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:48:56.372216544 +0000 UTC m=+347.073565035" watchObservedRunningTime="2026-01-30 13:50:29.362944431 +0000 UTC m=+440.064292922" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.363112 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvlgd" podStartSLOduration=138.765511015 podStartE2EDuration="4m23.363107585s" podCreationTimestamp="2026-01-30 13:46:06 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.833253934 +0000 UTC m=+179.534602425" lastFinishedPulling="2026-01-30 13:48:13.430850504 +0000 UTC m=+304.132198995" observedRunningTime="2026-01-30 13:48:56.422223518 +0000 UTC m=+347.123572019" watchObservedRunningTime="2026-01-30 13:50:29.363107585 +0000 UTC m=+440.064456086" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.363377 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mn7sx" podStartSLOduration=122.051402863 podStartE2EDuration="4m23.363373772s" podCreationTimestamp="2026-01-30 13:46:06 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.850816305 +0000 UTC m=+179.552164796" lastFinishedPulling="2026-01-30 13:48:30.162787214 +0000 UTC m=+320.864135705" observedRunningTime="2026-01-30 13:48:56.472938511 +0000 UTC m=+347.174287022" watchObservedRunningTime="2026-01-30 13:50:29.363373772 +0000 UTC m=+440.064722263" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.363708 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-j4vzj" podStartSLOduration=106.556875383 podStartE2EDuration="4m25.36370382s" podCreationTimestamp="2026-01-30 13:46:04 +0000 UTC" firstStartedPulling="2026-01-30 13:46:07.756077379 +0000 UTC m=+178.457425870" lastFinishedPulling="2026-01-30 13:48:46.562905816 +0000 UTC m=+337.264254307" observedRunningTime="2026-01-30 13:48:56.389879553 +0000 UTC m=+347.091228054" watchObservedRunningTime="2026-01-30 13:50:29.36370382 +0000 UTC m=+440.065052311" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.364332 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g9t8x" podStartSLOduration=109.566719487 podStartE2EDuration="4m26.364327746s" podCreationTimestamp="2026-01-30 13:46:03 +0000 UTC" firstStartedPulling="2026-01-30 13:46:08.884207962 +0000 UTC m=+179.585556453" lastFinishedPulling="2026-01-30 13:48:45.681816201 +0000 UTC m=+336.383164712" observedRunningTime="2026-01-30 13:48:56.454123791 +0000 UTC m=+347.155472302" watchObservedRunningTime="2026-01-30 13:50:29.364327746 +0000 UTC m=+440.065676227" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.365158 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fxl8f" podStartSLOduration=115.082957565 podStartE2EDuration="4m22.365152277s" podCreationTimestamp="2026-01-30 13:46:07 +0000 UTC" firstStartedPulling="2026-01-30 13:46:09.922379222 +0000 UTC m=+180.623727703" lastFinishedPulling="2026-01-30 13:48:37.204573924 +0000 UTC m=+327.905922415" observedRunningTime="2026-01-30 13:48:56.4352179 +0000 UTC m=+347.136566421" watchObservedRunningTime="2026-01-30 13:50:29.365152277 +0000 UTC m=+440.066500768" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.366916 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-9t46g"] Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367081 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367201 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp","openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"] Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367455 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" containerID="cri-o://f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b" gracePeriod=30 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367478 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367727 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="421ca100-bd7d-4a7b-9587-a77b5b928c5b" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.367748 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" 
containerID="cri-o://c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290" gracePeriod=30 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.395528 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.395509127 podStartE2EDuration="1m33.395509127s" podCreationTimestamp="2026-01-30 13:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:29.391428534 +0000 UTC m=+440.092777015" watchObservedRunningTime="2026-01-30 13:50:29.395509127 +0000 UTC m=+440.096857628" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.421333 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.421837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.426882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.789924 4793 generic.go:334] "Generic (PLEG): container finished" podID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerID="f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b" exitCode=0 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.790022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerDied","Data":"f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b"} Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.792600 4793 generic.go:334] "Generic (PLEG): container finished" podID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerID="c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290" exitCode=0 Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.792770 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerDied","Data":"c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290"} Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.792910 4793 scope.go:117] "RemoveContainer" containerID="6dc475d841ad7ccf7189817179fb736d89bc63690c21b60627e67fc5789a286b" Jan 30 13:50:29 crc kubenswrapper[4793]: I0130 13:50:29.797140 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.198769 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.199426 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.228818 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"] Jan 30 13:50:30 crc kubenswrapper[4793]: E0130 13:50:30.229083 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerName="installer" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229097 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerName="installer" Jan 30 13:50:30 crc kubenswrapper[4793]: E0130 13:50:30.229107 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229114 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: E0130 13:50:30.229128 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229134 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229235 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229251 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfc4931-01b5-4cc0-a5f5-c3d4e42121a5" containerName="installer" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229261 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229269 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" containerName="route-controller-manager" Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.229707 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.233995 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354465 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354529 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354669 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354690 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354713 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") pod \"11837748-ddd9-46ac-8f23-b0b77c511c39\" (UID: \"11837748-ddd9-46ac-8f23-b0b77c511c39\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354750 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") pod \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\" (UID: \"bb9452c1-1f30-4fd9-aaf3-49fd8266818d\") "
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354949 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.354986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca" (OuterVolumeSpecName: "client-ca") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355694 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config" (OuterVolumeSpecName: "config") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.355704 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config" (OuterVolumeSpecName: "config") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.356607 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca" (OuterVolumeSpecName: "client-ca") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.356806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361041 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361241 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz" (OuterVolumeSpecName: "kube-api-access-clpjz") pod "bb9452c1-1f30-4fd9-aaf3-49fd8266818d" (UID: "bb9452c1-1f30-4fd9-aaf3-49fd8266818d"). InnerVolumeSpecName "kube-api-access-clpjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361265 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.361719 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78" (OuterVolumeSpecName: "kube-api-access-94q78") pod "11837748-ddd9-46ac-8f23-b0b77c511c39" (UID: "11837748-ddd9-46ac-8f23-b0b77c511c39"). InnerVolumeSpecName "kube-api-access-94q78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.405937 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551044e9-867a-4307-a28c-ea34bab39473" path="/var/lib/kubelet/pods/551044e9-867a-4307-a28c-ea34bab39473/volumes"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456490 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456631 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456644 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456672 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456684 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456695 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94q78\" (UniqueName: \"kubernetes.io/projected/11837748-ddd9-46ac-8f23-b0b77c511c39-kube-api-access-94q78\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456708 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clpjz\" (UniqueName: \"kubernetes.io/projected/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-kube-api-access-clpjz\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456717 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11837748-ddd9-46ac-8f23-b0b77c511c39-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456726 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11837748-ddd9-46ac-8f23-b0b77c511c39-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.456805 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9452c1-1f30-4fd9-aaf3-49fd8266818d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.458545 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.459004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.464554 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.474417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"route-controller-manager-6678f655b-mzmfk\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") " pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.554373 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.738452 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.799338 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl" event={"ID":"11837748-ddd9-46ac-8f23-b0b77c511c39","Type":"ContainerDied","Data":"7dc9d90c1797415bdef39e7d33ab7879a133a25249498487ec03f24fae4459fc"}
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.799393 4793 scope.go:117] "RemoveContainer" containerID="f20e6d0a2f5f4dcf508e55d955774b064398a8134d06063fb2bd0bca37715f3b"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.799484 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.802747 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerStarted","Data":"a468e3c27d0a2cd913ba2f2058976b9b7319433f6282b4c4fb42aa2a1b0b5981"}
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.805715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp" event={"ID":"bb9452c1-1f30-4fd9-aaf3-49fd8266818d","Type":"ContainerDied","Data":"a76af574ae39e77263355b1e3c87d747ab2f9d1604f79be4a37d4e9cca505251"}
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.805956 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b476d486-lccjp"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.807857 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.829173 4793 scope.go:117] "RemoveContainer" containerID="c2225bef18ba9d885e8be28ad827b878179ba99db76f684234a752622dd76290"
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.845883 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.848871 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-674655ccb6-8dlkl"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.860744 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:50:30 crc kubenswrapper[4793]: I0130 13:50:30.865073 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74b476d486-lccjp"]
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.813936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerStarted","Data":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"}
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.814490 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.820519 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:31 crc kubenswrapper[4793]: I0130 13:50:31.863551 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" podStartSLOduration=10.863534741 podStartE2EDuration="10.863534741s" podCreationTimestamp="2026-01-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:31.832633077 +0000 UTC m=+442.533981568" watchObservedRunningTime="2026-01-30 13:50:31.863534741 +0000 UTC m=+442.564883232"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.386604 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:32 crc kubenswrapper[4793]: E0130 13:50:32.387329 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.387565 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" containerName="controller-manager"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.388639 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.393431 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.393775 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.393993 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.394740 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.395177 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.395512 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.401795 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.409386 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11837748-ddd9-46ac-8f23-b0b77c511c39" path="/var/lib/kubelet/pods/11837748-ddd9-46ac-8f23-b0b77c511c39/volumes"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.409986 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9452c1-1f30-4fd9-aaf3-49fd8266818d" path="/var/lib/kubelet/pods/bb9452c1-1f30-4fd9-aaf3-49fd8266818d/volumes"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.410711 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481652 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481741 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481781 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.481831 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583282 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583342 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583376 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.583421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.584608 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.584721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.585180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.589269 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.604476 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"controller-manager-678d9b98d-rdzsn\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") " pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.721821 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:32 crc kubenswrapper[4793]: I0130 13:50:32.908951 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.398728 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"
Jan 30 13:50:33 crc kubenswrapper[4793]: E0130 13:50:33.399389 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-zd5lq_openshift-marketplace(ee8452f4-fe2b-44d0-a26a-f7171e108fc9)\"" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.695257 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.830456 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerStarted","Data":"abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1"}
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.830517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerStarted","Data":"47aac0b2bf64b7e243b79435312f754a331791df342df5adc5c356c115ed01e4"}
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.831178 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.836724 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:33 crc kubenswrapper[4793]: I0130 13:50:33.851071 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" podStartSLOduration=12.851028049 podStartE2EDuration="12.851028049s" podCreationTimestamp="2026-01-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:33.845629712 +0000 UTC m=+444.546978203" watchObservedRunningTime="2026-01-30 13:50:33.851028049 +0000 UTC m=+444.552376540"
Jan 30 13:50:36 crc kubenswrapper[4793]: I0130 13:50:36.961539 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 13:50:36 crc kubenswrapper[4793]: I0130 13:50:36.963254 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" gracePeriod=5
Jan 30 13:50:37 crc kubenswrapper[4793]: I0130 13:50:37.398597 4793 scope.go:117] "RemoveContainer" containerID="598c516de85492fefd3748d7d01332587ed76f8169020c39af19b1708e581d68"
Jan 30 13:50:37 crc kubenswrapper[4793]: I0130 13:50:37.854134 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log"
Jan 30 13:50:37 crc kubenswrapper[4793]: I0130 13:50:37.854189 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cee855567135ea7489148ef33099b1918e9db05d7b89d2d000c91a4eeef3da3c"}
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.413588 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.413928 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.413981 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.414631 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.414694 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4" gracePeriod=600
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.562301 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.562374 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733430 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731265 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.731839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.733527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734408 4793 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734508 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734590 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.734732 4793 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.740499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.835675 4793 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880832 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880880 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205" exitCode=137
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880949 4793 scope.go:117] "RemoveContainer" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.880962 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.887112 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4" exitCode=0
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.887149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4"}
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.887187 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"}
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.910500 4793 scope.go:117] "RemoveContainer" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"
Jan 30 13:50:42 crc kubenswrapper[4793]: E0130 13:50:42.910957 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205\": container with ID starting with 33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205 not found: ID does not exist" containerID="33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.911001 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205"} err="failed to get container status \"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205\": rpc error: code = NotFound desc = could not find container \"33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205\": container with ID starting with 33b3b9565fffd5d05f73cba790a38f166e87d33fab9411dbc7955b35e8057205 not found: ID does not exist"
Jan 30 13:50:42 crc kubenswrapper[4793]: I0130 13:50:42.911066 4793 scope.go:117] "RemoveContainer" containerID="3aadd845663c469e07aedacb0ed30254b44122eb250cac08c1050490b9864629"
Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.408928 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.409650 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.424033 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.424091 4793 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6a002524-7583-4bfa-b6eb-cb91eb1be877"
Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.430759 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 13:50:44 crc kubenswrapper[4793]: I0130 13:50:44.430815 4793 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6a002524-7583-4bfa-b6eb-cb91eb1be877"
Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.598133 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.598643 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager" containerID="cri-o://abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1" gracePeriod=30
Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.608898 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.609416 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager" containerID="cri-o://428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" gracePeriod=30
Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.906634 4793 generic.go:334] "Generic (PLEG): container finished" podID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerID="abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1" exitCode=0
Jan 30 13:50:45 crc kubenswrapper[4793]: I0130 13:50:45.906928 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerDied","Data":"abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1"}
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.399077 4793 scope.go:117] "RemoveContainer" containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.592997 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.676555 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681681 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681741 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681768 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681794 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682491 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.681840 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682879 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") pod \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\" (UID: \"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.682926 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") pod \"75d0c552-96c4-4117-81ac-2b5a0007db12\" (UID: \"75d0c552-96c4-4117-81ac-2b5a0007db12\") "
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.683082 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.683627 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.683642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca" (OuterVolumeSpecName: "client-ca") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.684291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config" (OuterVolumeSpecName: "config") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.684456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config" (OuterVolumeSpecName: "config") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.701849 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44" (OuterVolumeSpecName: "kube-api-access-mtl44") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "kube-api-access-mtl44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.701978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw" (OuterVolumeSpecName: "kube-api-access-l9dpw") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "kube-api-access-l9dpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.702830 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" (UID: "b180bba6-6ae1-4a1d-a8db-0a0bb11134f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.704508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75d0c552-96c4-4117-81ac-2b5a0007db12" (UID: "75d0c552-96c4-4117-81ac-2b5a0007db12"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784412 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784447 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtl44\" (UniqueName: \"kubernetes.io/projected/75d0c552-96c4-4117-81ac-2b5a0007db12-kube-api-access-mtl44\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784458 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d0c552-96c4-4117-81ac-2b5a0007db12-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784466 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784474 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784484 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784492 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75d0c552-96c4-4117-81ac-2b5a0007db12-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.784502 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dpw\" (UniqueName: \"kubernetes.io/projected/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4-kube-api-access-l9dpw\") on node \"crc\" DevicePath \"\""
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.913295 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.913636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerStarted","Data":"12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793"}
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.914172 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915513 4793 generic.go:334] "Generic (PLEG): container finished" podID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d" exitCode=0
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915592 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerDied","Data":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"}
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915615 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk" event={"ID":"b180bba6-6ae1-4a1d-a8db-0a0bb11134f4","Type":"ContainerDied","Data":"a468e3c27d0a2cd913ba2f2058976b9b7319433f6282b4c4fb42aa2a1b0b5981"}
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915634 4793 scope.go:117] "RemoveContainer" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915690 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zd5lq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915724 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.915893 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.921868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn" event={"ID":"75d0c552-96c4-4117-81ac-2b5a0007db12","Type":"ContainerDied","Data":"47aac0b2bf64b7e243b79435312f754a331791df342df5adc5c356c115ed01e4"}
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.921907 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678d9b98d-rdzsn"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.948751 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.952012 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6678f655b-mzmfk"]
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.956461 4793 scope.go:117] "RemoveContainer" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"
Jan 30 13:50:46 crc kubenswrapper[4793]: E0130 13:50:46.957631 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d\": container with ID starting with 428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d not found: ID does not exist" containerID="428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.957672 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d"} err="failed to get container status \"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d\": rpc error: code = NotFound desc = could not find container \"428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d\": container with ID starting with 428b50863313b9e3f32fd853e023dc1ed93728d999e234c092eb9a819496072d not found: ID does not exist"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.957699 4793 scope.go:117] "RemoveContainer" containerID="abfdf91a9caa3ef9ef94ef207277a715338726c7d1101068e1fea87caabe98c1"
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.962269 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:46 crc kubenswrapper[4793]: I0130 13:50:46.967847 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-678d9b98d-rdzsn"]
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.406272 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"]
Jan 30 13:50:47 crc kubenswrapper[4793]: E0130 13:50:47.407595 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.407625 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: E0130 13:50:47.407649 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.407661 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 13:50:47 crc kubenswrapper[4793]: E0130 13:50:47.407685 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.407696 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.408561 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.408603 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" containerName="route-controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.408616 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" containerName="controller-manager"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.409980 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.419139 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"]
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.420544 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.420701 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.420865 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.421157 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.421325 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.421415 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.427358 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.429606 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.429833 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.429970 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.430210 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.430485 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.430694 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.441308 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.451969 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"]
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.455268 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"]
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"
Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493579
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493704 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493865 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.493890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.594955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595072 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595112 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595135 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595157 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.595313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.596307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.596798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.597088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.597264 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.597760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.600609 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.601172 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.602866 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.618702 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"route-controller-manager-f6cb68995-x72md\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.618814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"controller-manager-5cfb6886b5-4d5dz\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.752101 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.771953 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:47 crc kubenswrapper[4793]: I0130 13:50:47.948837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.063772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.112228 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.404654 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d0c552-96c4-4117-81ac-2b5a0007db12" path="/var/lib/kubelet/pods/75d0c552-96c4-4117-81ac-2b5a0007db12/volumes" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.405681 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b180bba6-6ae1-4a1d-a8db-0a0bb11134f4" path="/var/lib/kubelet/pods/b180bba6-6ae1-4a1d-a8db-0a0bb11134f4/volumes" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.958478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerStarted","Data":"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.959610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerStarted","Data":"02125fb06afb5a468ca285614473441b8b7036e21ea110c4b7a0074fd7543686"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.960013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.960135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerStarted","Data":"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.960219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerStarted","Data":"694d456dc5c8634cc2a3e1c82c98508ef3805387920ec823e200ed8493fd208d"} Jan 30 13:50:48 crc kubenswrapper[4793]: I0130 13:50:48.968194 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:50:49 crc kubenswrapper[4793]: I0130 13:50:49.005624 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" podStartSLOduration=4.00560593 podStartE2EDuration="4.00560593s" podCreationTimestamp="2026-01-30 13:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:49.003405312 +0000 UTC m=+459.704753803" watchObservedRunningTime="2026-01-30 13:50:49.00560593 +0000 
UTC m=+459.706954421" Jan 30 13:50:49 crc kubenswrapper[4793]: I0130 13:50:49.985094 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" podStartSLOduration=4.985073391 podStartE2EDuration="4.985073391s" podCreationTimestamp="2026-01-30 13:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:50:49.981853467 +0000 UTC m=+460.683201958" watchObservedRunningTime="2026-01-30 13:50:49.985073391 +0000 UTC m=+460.686421882" Jan 30 13:50:50 crc kubenswrapper[4793]: I0130 13:50:50.969840 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:50:50 crc kubenswrapper[4793]: I0130 13:50:50.975503 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:51:00 crc kubenswrapper[4793]: I0130 13:51:00.977015 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:51:00 crc kubenswrapper[4793]: I0130 13:51:00.979894 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" containerID="cri-o://871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" gracePeriod=30 Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.072498 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.072726 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" containerID="cri-o://bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" gracePeriod=30 Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.423512 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579196 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579302 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579334 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.579357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") pod \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\" (UID: \"eee2ee98-2b55-47c1-981f-dd0898b2bf63\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.580117 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca" (OuterVolumeSpecName: "client-ca") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.580824 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config" (OuterVolumeSpecName: "config") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.584778 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq" (OuterVolumeSpecName: "kube-api-access-gn6jq") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "kube-api-access-gn6jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.595753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eee2ee98-2b55-47c1-981f-dd0898b2bf63" (UID: "eee2ee98-2b55-47c1-981f-dd0898b2bf63"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680852 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn6jq\" (UniqueName: \"kubernetes.io/projected/eee2ee98-2b55-47c1-981f-dd0898b2bf63-kube-api-access-gn6jq\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680895 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680904 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2ee98-2b55-47c1-981f-dd0898b2bf63-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.680912 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2ee98-2b55-47c1-981f-dd0898b2bf63-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.874989 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983712 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983774 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.983943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") pod \"46946b58-1b0f-4def-8b3a-ea762612980a\" (UID: \"46946b58-1b0f-4def-8b3a-ea762612980a\") " Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.984950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config" (OuterVolumeSpecName: "config") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.985162 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca" (OuterVolumeSpecName: "client-ca") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.985323 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.989269 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:01 crc kubenswrapper[4793]: I0130 13:51:01.990334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2" (OuterVolumeSpecName: "kube-api-access-xr2l2") pod "46946b58-1b0f-4def-8b3a-ea762612980a" (UID: "46946b58-1b0f-4def-8b3a-ea762612980a"). InnerVolumeSpecName "kube-api-access-xr2l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026435 4793 generic.go:334] "Generic (PLEG): container finished" podID="46946b58-1b0f-4def-8b3a-ea762612980a" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" exitCode=0 Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerDied","Data":"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026576 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" event={"ID":"46946b58-1b0f-4def-8b3a-ea762612980a","Type":"ContainerDied","Data":"694d456dc5c8634cc2a3e1c82c98508ef3805387920ec823e200ed8493fd208d"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026598 4793 scope.go:117] "RemoveContainer" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.026609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.028716 4793 generic.go:334] "Generic (PLEG): container finished" podID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" exitCode=0 Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.028771 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.028746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerDied","Data":"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.029528 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md" event={"ID":"eee2ee98-2b55-47c1-981f-dd0898b2bf63","Type":"ContainerDied","Data":"02125fb06afb5a468ca285614473441b8b7036e21ea110c4b7a0074fd7543686"} Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.050975 4793 scope.go:117] "RemoveContainer" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.051814 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d\": container with ID starting with 871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d not found: ID does not exist" containerID="871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.051855 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d"} err="failed to get container status \"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d\": rpc error: code = NotFound desc = could not find container \"871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d\": container with ID starting with 871ae14ceef2143c09690aaabe16ecda99edbdb7479a0ac06fefc444a1a66e0d not found: ID does not exist" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.051883 4793 scope.go:117] "RemoveContainer" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.072766 4793 scope.go:117] "RemoveContainer" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.073248 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e\": container with ID starting with bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e not found: ID does not exist" containerID="bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.073346 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e"} err="failed to get container status \"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e\": rpc error: code = NotFound desc = could not find container \"bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e\": container with ID starting with bb13566ca9c7e5183c413dfa329222b26a62daea6e5c55d886612a2973ace33e not found: ID does not exist" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.073455 4793 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.076467 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f6cb68995-x72md"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.085973 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086021 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46946b58-1b0f-4def-8b3a-ea762612980a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086066 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2l2\" (UniqueName: \"kubernetes.io/projected/46946b58-1b0f-4def-8b3a-ea762612980a-kube-api-access-xr2l2\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086083 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086097 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46946b58-1b0f-4def-8b3a-ea762612980a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.086911 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.103239 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cfb6886b5-4d5dz"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.405605 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" path="/var/lib/kubelet/pods/46946b58-1b0f-4def-8b3a-ea762612980a/volumes" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.406210 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" path="/var/lib/kubelet/pods/eee2ee98-2b55-47c1-981f-dd0898b2bf63/volumes" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410136 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.410385 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410404 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: E0130 13:51:02.410415 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410424 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" Jan 30 13:51:02 crc 
kubenswrapper[4793]: I0130 13:51:02.410557 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="46946b58-1b0f-4def-8b3a-ea762612980a" containerName="controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.410573 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee2ee98-2b55-47c1-981f-dd0898b2bf63" containerName="route-controller-manager" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.411034 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.413408 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.413942 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414181 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414338 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414482 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.414977 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.416554 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.417395 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.423853 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.425477 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.425680 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426024 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426443 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.426725 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.439206 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.478882 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594561 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594795 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594931 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.594988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.595007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.595061 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695739 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 
30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695788 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695808 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.695895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.696877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.696877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.697474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.697531 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.697963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.702839 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.702893 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.718914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"controller-manager-7b74cd585c-nn75n\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.728189 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"route-controller-manager-6bb46b769c-tlznk\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.733859 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.747912 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:02 crc kubenswrapper[4793]: I0130 13:51:02.972716 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:02 crc kubenswrapper[4793]: W0130 13:51:02.982879 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a11e909_7bd4_4e65_bd54_61a34e199fc8.slice/crio-73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f WatchSource:0}: Error finding container 73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f: Status 404 returned error can't find the container with id 73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f Jan 30 13:51:03 crc kubenswrapper[4793]: I0130 13:51:03.011734 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:03 crc kubenswrapper[4793]: W0130 13:51:03.016143 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1245271f_581f_4ad6_88a5_fc8df98d908d.slice/crio-46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695 WatchSource:0}: Error finding container 46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695: Status 404 returned error can't find the container with id 46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695 Jan 30 13:51:03 crc kubenswrapper[4793]: I0130 13:51:03.035977 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerStarted","Data":"46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695"} Jan 30 13:51:03 crc kubenswrapper[4793]: I0130 13:51:03.039251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerStarted","Data":"73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f"} Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.047659 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerStarted","Data":"2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d"} Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.048007 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.050122 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerStarted","Data":"e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be"} Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.050520 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.054458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.055219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.084935 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" podStartSLOduration=3.084917673 podStartE2EDuration="3.084917673s" podCreationTimestamp="2026-01-30 13:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:04.066316196 +0000 UTC m=+474.767664697" watchObservedRunningTime="2026-01-30 13:51:04.084917673 +0000 UTC m=+474.786266154" Jan 30 13:51:04 crc kubenswrapper[4793]: I0130 13:51:04.102021 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" podStartSLOduration=4.10199949 podStartE2EDuration="4.10199949s" podCreationTimestamp="2026-01-30 13:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:04.100727617 +0000 UTC m=+474.802076128" watchObservedRunningTime="2026-01-30 13:51:04.10199949 +0000 UTC m=+474.803347981" Jan 30 13:51:11 crc kubenswrapper[4793]: I0130 13:51:11.790213 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:51:11 crc kubenswrapper[4793]: I0130 13:51:11.790724 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4vzj" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" containerID="cri-o://bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c" gracePeriod=2 Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.103765 4793 generic.go:334] "Generic (PLEG): container finished" podID="02ec4db2-0283-437a-999f-d50a10ab046c" containerID="bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c" exitCode=0 Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.104160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c"} Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.154914 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.312968 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") pod \"02ec4db2-0283-437a-999f-d50a10ab046c\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.313127 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") pod \"02ec4db2-0283-437a-999f-d50a10ab046c\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.313158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") pod \"02ec4db2-0283-437a-999f-d50a10ab046c\" (UID: \"02ec4db2-0283-437a-999f-d50a10ab046c\") " Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.314222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities" (OuterVolumeSpecName: "utilities") pod "02ec4db2-0283-437a-999f-d50a10ab046c" (UID: "02ec4db2-0283-437a-999f-d50a10ab046c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.319718 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk" (OuterVolumeSpecName: "kube-api-access-hm6vk") pod "02ec4db2-0283-437a-999f-d50a10ab046c" (UID: "02ec4db2-0283-437a-999f-d50a10ab046c"). InnerVolumeSpecName "kube-api-access-hm6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.359923 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02ec4db2-0283-437a-999f-d50a10ab046c" (UID: "02ec4db2-0283-437a-999f-d50a10ab046c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.391413 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.391684 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mn7sx" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" containerID="cri-o://6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" gracePeriod=2 Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.415927 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.415961 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm6vk\" (UniqueName: \"kubernetes.io/projected/02ec4db2-0283-437a-999f-d50a10ab046c-kube-api-access-hm6vk\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:12 crc kubenswrapper[4793]: I0130 13:51:12.415974 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02ec4db2-0283-437a-999f-d50a10ab046c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.735739 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.920851 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") pod \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.920907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") pod \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.920941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") pod \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\" (UID: \"96451b9c-e42f-43ae-9f62-bc830fa1ad9d\") " Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.921622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities" (OuterVolumeSpecName: "utilities") pod "96451b9c-e42f-43ae-9f62-bc830fa1ad9d" (UID: "96451b9c-e42f-43ae-9f62-bc830fa1ad9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.922228 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.923539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t" (OuterVolumeSpecName: "kube-api-access-mn89t") pod "96451b9c-e42f-43ae-9f62-bc830fa1ad9d" (UID: "96451b9c-e42f-43ae-9f62-bc830fa1ad9d"). InnerVolumeSpecName "kube-api-access-mn89t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:12.942385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96451b9c-e42f-43ae-9f62-bc830fa1ad9d" (UID: "96451b9c-e42f-43ae-9f62-bc830fa1ad9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.023266 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn89t\" (UniqueName: \"kubernetes.io/projected/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-kube-api-access-mn89t\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.023330 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96451b9c-e42f-43ae-9f62-bc830fa1ad9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.113213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vzj" event={"ID":"02ec4db2-0283-437a-999f-d50a10ab046c","Type":"ContainerDied","Data":"ee249470c28be7e643027b7d1d76ee1a880e2751bfa6c780b72800ea7daeb066"} Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.113232 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vzj" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.113283 4793 scope.go:117] "RemoveContainer" containerID="bca1d232355315db4731f9a23c3d510cb5c3560c5a03542708615d5cdb216d6c" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116176 4793 generic.go:334] "Generic (PLEG): container finished" podID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" exitCode=0 Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116205 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c"} Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116233 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn7sx" event={"ID":"96451b9c-e42f-43ae-9f62-bc830fa1ad9d","Type":"ContainerDied","Data":"097e24f55ac27743bd9630217aba68c9f9433798eb25d4a7ca41ee8c4336a653"} Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.116246 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn7sx" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.134843 4793 scope.go:117] "RemoveContainer" containerID="b9519a38e06d14f0b9522f2ca7c944b5d849d5137311c5fba903cacfaefb9b67" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.146399 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.157691 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4vzj"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.161258 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.167330 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn7sx"] Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.177910 4793 scope.go:117] "RemoveContainer" containerID="9d4a750d40d93b392b9501779e0e72734cfa6f671669f4891033addc84b52774" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.199013 4793 scope.go:117] "RemoveContainer" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.221318 4793 scope.go:117] "RemoveContainer" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.234658 4793 scope.go:117] "RemoveContainer" containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.254930 4793 scope.go:117] "RemoveContainer" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" Jan 30 13:51:13 crc kubenswrapper[4793]: E0130 13:51:13.255751 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c\": container with ID starting with 6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c not found: ID does not exist" containerID="6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.255800 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c"} err="failed to get container status \"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c\": rpc error: code = NotFound desc = could not find container \"6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c\": container with ID starting with 6b42d22ad97af9e5d3b4405351c07909f11bf293db5ae48cef9695b925e1569c not found: ID does not exist" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.255831 4793 scope.go:117] "RemoveContainer" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" Jan 30 13:51:13 crc kubenswrapper[4793]: E0130 13:51:13.256277 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028\": container with ID starting with 7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028 not found: ID does not exist" containerID="7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028" Jan 30 
13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.256298 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028"} err="failed to get container status \"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028\": rpc error: code = NotFound desc = could not find container \"7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028\": container with ID starting with 7dddbf9598fdef9c737090e3d16d920f3d93e8c9b18562b7f98c77fe5bca4028 not found: ID does not exist" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.256312 4793 scope.go:117] "RemoveContainer" containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" Jan 30 13:51:13 crc kubenswrapper[4793]: E0130 13:51:13.256571 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf\": container with ID starting with 6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf not found: ID does not exist" containerID="6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf" Jan 30 13:51:13 crc kubenswrapper[4793]: I0130 13:51:13.256646 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf"} err="failed to get container status \"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf\": rpc error: code = NotFound desc = could not find container \"6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf\": container with ID starting with 6394036982cb2461e2b01f18e51ee1fcec5017f53f58f6380f19ec559dac59cf not found: ID does not exist" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.408901 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" path="/var/lib/kubelet/pods/02ec4db2-0283-437a-999f-d50a10ab046c/volumes" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.409649 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" path="/var/lib/kubelet/pods/96451b9c-e42f-43ae-9f62-bc830fa1ad9d/volumes" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.593661 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.593936 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fxl8f" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" containerID="cri-o://7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" gracePeriod=2 Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.929765 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.950699 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") pod \"0005ba9f-0f70-4df4-b588-8e6f941fec61\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.950756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") pod \"0005ba9f-0f70-4df4-b588-8e6f941fec61\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.950800 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") pod \"0005ba9f-0f70-4df4-b588-8e6f941fec61\" (UID: \"0005ba9f-0f70-4df4-b588-8e6f941fec61\") " Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.951822 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities" (OuterVolumeSpecName: "utilities") pod "0005ba9f-0f70-4df4-b588-8e6f941fec61" (UID: "0005ba9f-0f70-4df4-b588-8e6f941fec61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.954598 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:14 crc kubenswrapper[4793]: I0130 13:51:14.986357 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd" (OuterVolumeSpecName: "kube-api-access-2w4dd") pod "0005ba9f-0f70-4df4-b588-8e6f941fec61" (UID: "0005ba9f-0f70-4df4-b588-8e6f941fec61"). InnerVolumeSpecName "kube-api-access-2w4dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.055390 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w4dd\" (UniqueName: \"kubernetes.io/projected/0005ba9f-0f70-4df4-b588-8e6f941fec61-kube-api-access-2w4dd\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.103761 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0005ba9f-0f70-4df4-b588-8e6f941fec61" (UID: "0005ba9f-0f70-4df4-b588-8e6f941fec61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.156906 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0005ba9f-0f70-4df4-b588-8e6f941fec61-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160846 4793 generic.go:334] "Generic (PLEG): container finished" podID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" exitCode=0 Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160913 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087"} Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160960 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fxl8f" event={"ID":"0005ba9f-0f70-4df4-b588-8e6f941fec61","Type":"ContainerDied","Data":"13f1368c8d56c2f3e8a8787fdd36533c727a2ee0ef9f036522e165e8dc981e1f"} Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.160980 4793 scope.go:117] "RemoveContainer" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.161200 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fxl8f" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.180528 4793 scope.go:117] "RemoveContainer" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.204028 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.204104 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fxl8f"] Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.222613 4793 scope.go:117] "RemoveContainer" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.238334 4793 scope.go:117] "RemoveContainer" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" Jan 30 13:51:15 crc kubenswrapper[4793]: E0130 13:51:15.238837 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087\": container with ID starting with 7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087 not found: ID does not exist" containerID="7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.238896 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087"} err="failed to get container status \"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087\": rpc error: code = NotFound desc = could not find container \"7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087\": container with ID starting with 7c55490f1f89303a5c58c8f7eea1d8d18f1ffdb6ed33d815ec954847898cc087 not found: ID does not exist" Jan 30 13:51:15 crc 
kubenswrapper[4793]: I0130 13:51:15.238932 4793 scope.go:117] "RemoveContainer" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" Jan 30 13:51:15 crc kubenswrapper[4793]: E0130 13:51:15.239473 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d\": container with ID starting with 0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d not found: ID does not exist" containerID="0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.239499 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d"} err="failed to get container status \"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d\": rpc error: code = NotFound desc = could not find container \"0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d\": container with ID starting with 0836c16745b8535e6efd1dac2ed4efb8bb1a350e6b55b230db1a08e5b9af3d8d not found: ID does not exist" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.239518 4793 scope.go:117] "RemoveContainer" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" Jan 30 13:51:15 crc kubenswrapper[4793]: E0130 13:51:15.239850 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e\": container with ID starting with 11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e not found: ID does not exist" containerID="11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e" Jan 30 13:51:15 crc kubenswrapper[4793]: I0130 13:51:15.239900 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e"} err="failed to get container status \"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e\": rpc error: code = NotFound desc = could not find container \"11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e\": container with ID starting with 11350b3aced4eec2d1a863ac7bf211ef2cdbf115a92df231c8b8e84111986b2e not found: ID does not exist" Jan 30 13:51:16 crc kubenswrapper[4793]: I0130 13:51:16.404291 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" path="/var/lib/kubelet/pods/0005ba9f-0f70-4df4-b588-8e6f941fec61/volumes" Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.956562 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.957011 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" containerID="cri-o://e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be" gracePeriod=30 Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.978155 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:20 crc kubenswrapper[4793]: I0130 13:51:20.978629 
4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerName="route-controller-manager" containerID="cri-o://2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d" gracePeriod=30 Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.195938 4793 generic.go:334] "Generic (PLEG): container finished" podID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerID="2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d" exitCode=0 Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.196028 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerDied","Data":"2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d"} Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.197776 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerID="e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be" exitCode=0 Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.197808 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerDied","Data":"e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be"} Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.407269 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587005 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587038 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587271 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.587343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") pod \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\" (UID: \"7a11e909-7bd4-4e65-bd54-61a34e199fc8\") " Jan 30 13:51:21 crc 
kubenswrapper[4793]: I0130 13:51:21.590321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config" (OuterVolumeSpecName: "config") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.590856 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.590848 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.594804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.598138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9" (OuterVolumeSpecName: "kube-api-access-6vvh9") pod "7a11e909-7bd4-4e65-bd54-61a34e199fc8" (UID: "7a11e909-7bd4-4e65-bd54-61a34e199fc8"). InnerVolumeSpecName "kube-api-access-6vvh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.644503 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688784 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.688924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") pod \"1245271f-581f-4ad6-88a5-fc8df98d908d\" (UID: \"1245271f-581f-4ad6-88a5-fc8df98d908d\") " Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689068 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689083 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689092 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a11e909-7bd4-4e65-bd54-61a34e199fc8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689101 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a11e909-7bd4-4e65-bd54-61a34e199fc8-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689109 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvh9\" (UniqueName: \"kubernetes.io/projected/7a11e909-7bd4-4e65-bd54-61a34e199fc8-kube-api-access-6vvh9\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.689732 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca" (OuterVolumeSpecName: "client-ca") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.690456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config" (OuterVolumeSpecName: "config") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.694123 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn" (OuterVolumeSpecName: "kube-api-access-knwtn") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "kube-api-access-knwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.694837 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1245271f-581f-4ad6-88a5-fc8df98d908d" (UID: "1245271f-581f-4ad6-88a5-fc8df98d908d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789725 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1245271f-581f-4ad6-88a5-fc8df98d908d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789756 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789766 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwtn\" (UniqueName: \"kubernetes.io/projected/1245271f-581f-4ad6-88a5-fc8df98d908d-kube-api-access-knwtn\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:21 crc kubenswrapper[4793]: I0130 13:51:21.789774 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1245271f-581f-4ad6-88a5-fc8df98d908d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.204622 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" event={"ID":"1245271f-581f-4ad6-88a5-fc8df98d908d","Type":"ContainerDied","Data":"46759adc3b7bb4c6ec47d6365be3d50922aace4685bdab03dae1a7603d72e695"} Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.205517 4793 scope.go:117] "RemoveContainer" containerID="2a7c84a7d77a4a992aaa084de64dbea7ab714ae6261878fd4f6f7001e5a8a24d" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.205689 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.210425 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" event={"ID":"7a11e909-7bd4-4e65-bd54-61a34e199fc8","Type":"ContainerDied","Data":"73cc7ac92e261ad0293e3752e2085be90be9ad023065c187bd33f01950036c6f"} Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.210504 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74cd585c-nn75n" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.231817 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.240110 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46b769c-tlznk"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.249099 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.249443 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b74cd585c-nn75n"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.406409 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" path="/var/lib/kubelet/pods/1245271f-581f-4ad6-88a5-fc8df98d908d/volumes" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.407600 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" path="/var/lib/kubelet/pods/7a11e909-7bd4-4e65-bd54-61a34e199fc8/volumes" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425165 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425392 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425409 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425420 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425427 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425438 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425445 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425453 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" 
containerName="route-controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425459 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerName="route-controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425469 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425476 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425487 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425493 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425504 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425511 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425688 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425701 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425710 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425717 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="extract-utilities" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425725 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425732 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="extract-content" Jan 30 13:51:22 crc kubenswrapper[4793]: E0130 13:51:22.425743 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425750 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425854 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ec4db2-0283-437a-999f-d50a10ab046c" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425866 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="96451b9c-e42f-43ae-9f62-bc830fa1ad9d" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425879 4793 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0005ba9f-0f70-4df4-b588-8e6f941fec61" containerName="registry-server" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425889 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a11e909-7bd4-4e65-bd54-61a34e199fc8" containerName="controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.425897 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1245271f-581f-4ad6-88a5-fc8df98d908d" containerName="route-controller-manager" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.426373 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.428879 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.429576 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431288 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431501 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.431916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.432243 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.432375 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.432516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.435222 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.435405 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.435936 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.436484 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.436730 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.443912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.444846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.454457 4793 scope.go:117] "RemoveContainer" containerID="e1188b4a585d96fdedf9930e72dec5ac8fd06f99633ce9ac5a9ab4c8d741f7be" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.458987 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600735 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.600978 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.601125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.601804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.601945 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702969 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.702987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod 
\"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703088 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.703114 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.704174 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.704389 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:22 crc kubenswrapper[4793]: I0130 13:51:22.705500 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.296322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.296680 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc 
kubenswrapper[4793]: I0130 13:51:23.297896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.298584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.299363 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"route-controller-manager-576768b7d7-jzc5b\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.299591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"controller-manager-7494b498cc-pw58f\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.356208 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.368485 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.580783 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:23 crc kubenswrapper[4793]: W0130 13:51:23.584371 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331250ca_4896_4db5_9193_0bc4014543aa.slice/crio-2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50 WatchSource:0}: Error finding container 2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50: Status 404 returned error can't find the container with id 2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50 Jan 30 13:51:23 crc kubenswrapper[4793]: I0130 13:51:23.624395 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:24 crc kubenswrapper[4793]: I0130 13:51:24.224005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerStarted","Data":"5b10f9fb8b30b6886a920ccc357efc6e18c777018ff73968b7b489e1cd955680"} Jan 30 13:51:24 crc kubenswrapper[4793]: I0130 13:51:24.225501 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerStarted","Data":"2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50"} Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.237811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerStarted","Data":"5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9"} Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.239537 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.241143 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerStarted","Data":"bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6"} Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.241750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.245955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.249432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.263953 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" podStartSLOduration=5.263935757 podStartE2EDuration="5.263935757s" 
podCreationTimestamp="2026-01-30 13:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:25.25755132 +0000 UTC m=+495.958899821" watchObservedRunningTime="2026-01-30 13:51:25.263935757 +0000 UTC m=+495.965284248" Jan 30 13:51:25 crc kubenswrapper[4793]: I0130 13:51:25.296911 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" podStartSLOduration=5.296891749 podStartE2EDuration="5.296891749s" podCreationTimestamp="2026-01-30 13:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:25.291849167 +0000 UTC m=+495.993197668" watchObservedRunningTime="2026-01-30 13:51:25.296891749 +0000 UTC m=+495.998240240" Jan 30 13:51:28 crc kubenswrapper[4793]: I0130 13:51:28.549429 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:51:40 crc kubenswrapper[4793]: I0130 13:51:40.942544 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:40 crc kubenswrapper[4793]: I0130 13:51:40.944261 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" containerID="cri-o://5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9" gracePeriod=30 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.049885 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.050495 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" containerID="cri-o://bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6" gracePeriod=30 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.330216 4793 generic.go:334] "Generic (PLEG): container finished" podID="331250ca-4896-4db5-9193-0bc4014543aa" containerID="bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6" exitCode=0 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.330282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerDied","Data":"bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6"} Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.333222 4793 generic.go:334] "Generic (PLEG): container finished" podID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerID="5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9" exitCode=0 Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.333274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerDied","Data":"5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9"} Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 
13:51:41.804689 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.935825 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.935889 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.935960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936012 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") pod \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\" (UID: \"f6b5259e-bc29-45fb-b54a-9ea88b2c9455\") " Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.936794 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config" (OuterVolumeSpecName: "config") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.937315 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.937333 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.937424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.941216 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.944124 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf" (OuterVolumeSpecName: "kube-api-access-pgfwf") pod "f6b5259e-bc29-45fb-b54a-9ea88b2c9455" (UID: "f6b5259e-bc29-45fb-b54a-9ea88b2c9455"). InnerVolumeSpecName "kube-api-access-pgfwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:41 crc kubenswrapper[4793]: I0130 13:51:41.983343 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.037619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.037860 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.037931 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038027 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") pod \"331250ca-4896-4db5-9193-0bc4014543aa\" (UID: \"331250ca-4896-4db5-9193-0bc4014543aa\") " Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038242 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038303 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038398 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgfwf\" (UniqueName: \"kubernetes.io/projected/f6b5259e-bc29-45fb-b54a-9ea88b2c9455-kube-api-access-pgfwf\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038944 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.038975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config" (OuterVolumeSpecName: "config") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.041499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j" (OuterVolumeSpecName: "kube-api-access-jj76j") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). 
InnerVolumeSpecName "kube-api-access-jj76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.042028 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "331250ca-4896-4db5-9193-0bc4014543aa" (UID: "331250ca-4896-4db5-9193-0bc4014543aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139128 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331250ca-4896-4db5-9193-0bc4014543aa-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139212 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139226 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj76j\" (UniqueName: \"kubernetes.io/projected/331250ca-4896-4db5-9193-0bc4014543aa-kube-api-access-jj76j\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.139240 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/331250ca-4896-4db5-9193-0bc4014543aa-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.341100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" event={"ID":"f6b5259e-bc29-45fb-b54a-9ea88b2c9455","Type":"ContainerDied","Data":"5b10f9fb8b30b6886a920ccc357efc6e18c777018ff73968b7b489e1cd955680"} Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.341165 4793 scope.go:117] "RemoveContainer" containerID="5fd6a852dcf845aab42cc9dc74f3e773cd0bb7e06a1fd43cd8a36865b0b6cfb9" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.341933 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7494b498cc-pw58f" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.344844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" event={"ID":"331250ca-4896-4db5-9193-0bc4014543aa","Type":"ContainerDied","Data":"2c5af4b42b4104017ffa38ef067d7472affee0ab8c8ae6656bb2b0ae3714df50"} Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.344928 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.379614 4793 scope.go:117] "RemoveContainer" containerID="bfd8ea71474cacbd139e6aa78a900da8a61bbb4015df2e4c9fa0f4dcc58743f6" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.382355 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.394558 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-576768b7d7-jzc5b"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.406815 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331250ca-4896-4db5-9193-0bc4014543aa" path="/var/lib/kubelet/pods/331250ca-4896-4db5-9193-0bc4014543aa/volumes" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.407376 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.407417 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7494b498cc-pw58f"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438211 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86cbff96d8-xtxlp"] Jan 30 13:51:42 crc kubenswrapper[4793]: E0130 13:51:42.438535 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438550 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: E0130 13:51:42.438560 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438566 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438669 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" containerName="controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.438683 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="331250ca-4896-4db5-9193-0bc4014543aa" containerName="route-controller-manager" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.439309 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442111 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442397 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442558 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.442729 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.443014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.443139 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-client-ca\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwlb\" (UniqueName: \"kubernetes.io/projected/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-kube-api-access-bnwlb\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-serving-cert\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444378 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-proxy-ca-bundles\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.444409 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-config\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.451365 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.452439 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.453144 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455539 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455650 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455719 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455850 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455931 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.455969 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.459927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.466107 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86cbff96d8-xtxlp"] Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545481 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-config\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhlb\" (UniqueName: \"kubernetes.io/projected/19a2f709-4d35-44f7-a44f-ab7a40157469-kube-api-access-xdhlb\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545566 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a2f709-4d35-44f7-a44f-ab7a40157469-serving-cert\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-client-ca\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545645 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-client-ca\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwlb\" (UniqueName: \"kubernetes.io/projected/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-kube-api-access-bnwlb\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545699 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-serving-cert\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-proxy-ca-bundles\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.545765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-config\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.547565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-client-ca\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.548482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-proxy-ca-bundles\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.549581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-config\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " 
pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.550822 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-serving-cert\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.567696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwlb\" (UniqueName: \"kubernetes.io/projected/4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8-kube-api-access-bnwlb\") pod \"controller-manager-86cbff96d8-xtxlp\" (UID: \"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8\") " pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-config\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646718 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhlb\" (UniqueName: \"kubernetes.io/projected/19a2f709-4d35-44f7-a44f-ab7a40157469-kube-api-access-xdhlb\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646747 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a2f709-4d35-44f7-a44f-ab7a40157469-serving-cert\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.646771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-client-ca\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.647629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-client-ca\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.648342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a2f709-4d35-44f7-a44f-ab7a40157469-config\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.650066 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a2f709-4d35-44f7-a44f-ab7a40157469-serving-cert\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.667839 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhlb\" (UniqueName: \"kubernetes.io/projected/19a2f709-4d35-44f7-a44f-ab7a40157469-kube-api-access-xdhlb\") pod \"route-controller-manager-56c68f6bcb-5frrw\" (UID: \"19a2f709-4d35-44f7-a44f-ab7a40157469\") " pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.770070 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.781719 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:42 crc kubenswrapper[4793]: I0130 13:51:42.985459 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86cbff96d8-xtxlp"] Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.025170 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw"] Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.364311 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" event={"ID":"19a2f709-4d35-44f7-a44f-ab7a40157469","Type":"ContainerStarted","Data":"cec1fbbf6d73f1f7b56b5701008347818034a74cf6cec9e99744af4f5bd2e863"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.364353 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" event={"ID":"19a2f709-4d35-44f7-a44f-ab7a40157469","Type":"ContainerStarted","Data":"7b0092efa97ac65157c72d9464478a7355ff3c6b5b2f3e2fdf538ee99d4e5bf3"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.367415 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" event={"ID":"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8","Type":"ContainerStarted","Data":"72704c66c3729cb093fbbd41eeb70141ec2256d549d5988325a79c0dd98919c3"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.367443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" event={"ID":"4579bb6b-ff5b-4b7f-a01c-1cf0809d38f8","Type":"ContainerStarted","Data":"a3508b306af17b26cb81b0c9ba3ee0eeb0a48915fe92361a1ce14cc6c384f368"} Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.368013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.373891 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" Jan 30 13:51:43 crc kubenswrapper[4793]: I0130 13:51:43.402330 4793 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-86cbff96d8-xtxlp" podStartSLOduration=3.402310729 podStartE2EDuration="3.402310729s" podCreationTimestamp="2026-01-30 13:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:43.389213186 +0000 UTC m=+514.090561697" watchObservedRunningTime="2026-01-30 13:51:43.402310729 +0000 UTC m=+514.103659230" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.376027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.380850 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.398941 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56c68f6bcb-5frrw" podStartSLOduration=3.398924348 podStartE2EDuration="3.398924348s" podCreationTimestamp="2026-01-30 13:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:44.396416453 +0000 UTC m=+515.097764954" watchObservedRunningTime="2026-01-30 13:51:44.398924348 +0000 UTC m=+515.100272839" Jan 30 13:51:44 crc kubenswrapper[4793]: I0130 13:51:44.410588 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b5259e-bc29-45fb-b54a-9ea88b2c9455" path="/var/lib/kubelet/pods/f6b5259e-bc29-45fb-b54a-9ea88b2c9455/volumes" Jan 30 13:51:53 crc kubenswrapper[4793]: I0130 13:51:53.579527 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" containerID="cri-o://2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794" gracePeriod=15 Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.434842 4793 generic.go:334] "Generic (PLEG): container finished" podID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerID="2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794" exitCode=0 Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.434895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerDied","Data":"2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794"} Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.700266 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746041 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz"] Jan 30 13:51:54 crc kubenswrapper[4793]: E0130 13:51:54.746360 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746381 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746483 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" containerName="oauth-openshift" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.746965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.751351 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz"] Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.821989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822449 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822498 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822531 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822576 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822629 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822654 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822681 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822702 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822746 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822776 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822799 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") pod \"4a64abca-3318-4208-8edb-1474e0ba5f2f\" (UID: \"4a64abca-3318-4208-8edb-1474e0ba5f2f\") " Jan 30 
13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822875 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822903 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvn6\" (UniqueName: \"kubernetes.io/projected/d0b6d37a-e922-4801-b3ef-78204821353f-kube-api-access-kzvn6\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-audit-policies\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.822983 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-session\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823034 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-error\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: 
\"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823080 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823106 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0b6d37a-e922-4801-b3ef-78204821353f-audit-dir\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823140 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-login\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823255 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.823739 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.824238 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.825463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828573 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828663 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb" (OuterVolumeSpecName: "kube-api-access-4vhgb") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "kube-api-access-4vhgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.828920 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.829139 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.829295 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.830260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.830620 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.838668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4a64abca-3318-4208-8edb-1474e0ba5f2f" (UID: "4a64abca-3318-4208-8edb-1474e0ba5f2f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923836 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-login\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923929 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.923944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvn6\" (UniqueName: \"kubernetes.io/projected/d0b6d37a-e922-4801-b3ef-78204821353f-kube-api-access-kzvn6\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-audit-policies\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924076 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " 
pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924092 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-session\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-error\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0b6d37a-e922-4801-b3ef-78204821353f-audit-dir\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924286 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924296 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924306 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924315 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924324 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924334 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924344 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924355 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924365 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924373 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vhgb\" (UniqueName: \"kubernetes.io/projected/4a64abca-3318-4208-8edb-1474e0ba5f2f-kube-api-access-4vhgb\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924382 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924390 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a64abca-3318-4208-8edb-1474e0ba5f2f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924398 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a64abca-3318-4208-8edb-1474e0ba5f2f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.924627 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0b6d37a-e922-4801-b3ef-78204821353f-audit-dir\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: 
I0130 13:51:54.925711 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.927811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.927846 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-login\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928251 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-audit-policies\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.928860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.929125 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-session\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.929867 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.929944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.932509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0b6d37a-e922-4801-b3ef-78204821353f-v4-0-config-user-template-error\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:54 crc kubenswrapper[4793]: I0130 13:51:54.946091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvn6\" (UniqueName: \"kubernetes.io/projected/d0b6d37a-e922-4801-b3ef-78204821353f-kube-api-access-kzvn6\") pod \"oauth-openshift-6867fd7dd7-mlfhz\" (UID: \"d0b6d37a-e922-4801-b3ef-78204821353f\") " pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.067226 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.314191 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz"] Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.443068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" event={"ID":"4a64abca-3318-4208-8edb-1474e0ba5f2f","Type":"ContainerDied","Data":"0e39fca869bb577560ccf5c5e0fd7294441d98f691e7a0b7c896fff632efcbeb"} Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.443113 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2mcj" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.443129 4793 scope.go:117] "RemoveContainer" containerID="2275a87f84b4ec94a142778010cf54bfc2388e423117a117dbf57f37d1a87794" Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.447935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" event={"ID":"d0b6d37a-e922-4801-b3ef-78204821353f","Type":"ContainerStarted","Data":"bbfcddcabfb6a27a0277b67b3c2861ba194b2dde5aeaa47c2123bc529e8a0e4f"} Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.477553 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:51:55 crc kubenswrapper[4793]: I0130 13:51:55.481000 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2mcj"] Jan 30 13:51:55 crc kubenswrapper[4793]: E0130 13:51:55.568907 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a64abca_3318_4208_8edb_1474e0ba5f2f.slice/crio-0e39fca869bb577560ccf5c5e0fd7294441d98f691e7a0b7c896fff632efcbeb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a64abca_3318_4208_8edb_1474e0ba5f2f.slice\": RecentStats: unable to find data in memory cache]" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.414333 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a64abca-3318-4208-8edb-1474e0ba5f2f" path="/var/lib/kubelet/pods/4a64abca-3318-4208-8edb-1474e0ba5f2f/volumes" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.463542 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" event={"ID":"d0b6d37a-e922-4801-b3ef-78204821353f","Type":"ContainerStarted","Data":"a4995f51bf42afc49c864cd27829050a7585e6c40004540a25dd60f10256140a"} Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.464210 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.474478 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" Jan 30 13:51:56 crc kubenswrapper[4793]: I0130 13:51:56.515026 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6867fd7dd7-mlfhz" podStartSLOduration=28.515011313 podStartE2EDuration="28.515011313s" podCreationTimestamp="2026-01-30 13:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:51:56.488331764 +0000 UTC m=+527.189680265" watchObservedRunningTime="2026-01-30 13:51:56.515011313 +0000 UTC m=+527.216359804" Jan 30 13:52:13 crc kubenswrapper[4793]: I0130 13:52:13.537295 4793 scope.go:117] "RemoveContainer" containerID="9fce52fd4df200cd47b1ec015ae5f6e141a21db87359d7fd523e3ede8826e2ec" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.320847 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.322582 4793 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-g9t8x" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" containerID="cri-o://393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.324722 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.325110 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6qnl2" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" containerID="cri-o://84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.344191 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.344693 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" containerID="cri-o://12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.365342 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.365848 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kvlgd" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" containerID="cri-o://539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.376758 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zkjbp"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.378303 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.386768 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.387067 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vn6kf" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" containerID="cri-o://04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b" gracePeriod=30 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.447373 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zkjbp"] Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.547458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.547787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.547974 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdcw\" (UniqueName: \"kubernetes.io/projected/5834bf4b-676f-4ece-bcee-28949a7109ca-kube-api-access-fsdcw\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.649235 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.649285 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.649329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdcw\" (UniqueName: \"kubernetes.io/projected/5834bf4b-676f-4ece-bcee-28949a7109ca-kube-api-access-fsdcw\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.650825 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.655362 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5834bf4b-676f-4ece-bcee-28949a7109ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.664665 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdcw\" (UniqueName: \"kubernetes.io/projected/5834bf4b-676f-4ece-bcee-28949a7109ca-kube-api-access-fsdcw\") pod \"marketplace-operator-79b997595-zkjbp\" (UID: \"5834bf4b-676f-4ece-bcee-28949a7109ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.816443 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.949657 4793 generic.go:334] "Generic (PLEG): container finished" podID="89a43c58-d327-429a-96cd-9f9f5393368a" containerID="04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.949890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.955545 4793 generic.go:334] "Generic (PLEG): container finished" podID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerID="539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.955722 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.958882 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zd5lq_ee8452f4-fe2b-44d0-a26a-f7171e108fc9/marketplace-operator/3.log" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.958946 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerID="12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.959326 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.959396 4793 scope.go:117] "RemoveContainer" 
containerID="010d81416921c00a0cfdea55cdfad52a809a96bd403680df2df4978f6d97ee18" Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.973234 4793 generic.go:334] "Generic (PLEG): container finished" podID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerID="84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01" exitCode=0 Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.973303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.979071 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1"} Jan 30 13:52:19 crc kubenswrapper[4793]: I0130 13:52:19.979036 4793 generic.go:334] "Generic (PLEG): container finished" podID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerID="393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1" exitCode=0 Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.261013 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zkjbp"] Jan 30 13:52:20 crc kubenswrapper[4793]: W0130 13:52:20.291320 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5834bf4b_676f_4ece_bcee_28949a7109ca.slice/crio-5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5 WatchSource:0}: Error finding container 5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5: Status 404 returned error can't find the container with id 5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5 Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.365904 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.447737 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.530440 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.537569 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.541347 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559577 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") pod \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") pod \"89a43c58-d327-429a-96cd-9f9f5393368a\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559657 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") pod \"89a43c58-d327-429a-96cd-9f9f5393368a\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559687 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") pod \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559716 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") pod \"89a43c58-d327-429a-96cd-9f9f5393368a\" (UID: \"89a43c58-d327-429a-96cd-9f9f5393368a\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.559748 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") pod \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\" (UID: \"08b55ba0-087d-42ec-a0c5-538f0a3c0987\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.565715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4" (OuterVolumeSpecName: "kube-api-access-nhvt4") pod "08b55ba0-087d-42ec-a0c5-538f0a3c0987" (UID: "08b55ba0-087d-42ec-a0c5-538f0a3c0987"). InnerVolumeSpecName "kube-api-access-nhvt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.567037 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities" (OuterVolumeSpecName: "utilities") pod "08b55ba0-087d-42ec-a0c5-538f0a3c0987" (UID: "08b55ba0-087d-42ec-a0c5-538f0a3c0987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.582165 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities" (OuterVolumeSpecName: "utilities") pod "89a43c58-d327-429a-96cd-9f9f5393368a" (UID: "89a43c58-d327-429a-96cd-9f9f5393368a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.584540 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln" (OuterVolumeSpecName: "kube-api-access-pwrln") pod "89a43c58-d327-429a-96cd-9f9f5393368a" (UID: "89a43c58-d327-429a-96cd-9f9f5393368a"). InnerVolumeSpecName "kube-api-access-pwrln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.587425 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08b55ba0-087d-42ec-a0c5-538f0a3c0987" (UID: "08b55ba0-087d-42ec-a0c5-538f0a3c0987"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") pod \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") pod \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662421 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") pod \"840c8b00-73a4-4378-b5a8-83f2595916a4\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662482 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") pod \"840c8b00-73a4-4378-b5a8-83f2595916a4\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662537 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") pod \"b34660b0-a161-4587-96a6-1a86a2e3f632\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662560 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") pod \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\" (UID: \"ee8452f4-fe2b-44d0-a26a-f7171e108fc9\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") pod \"b34660b0-a161-4587-96a6-1a86a2e3f632\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " Jan 30 13:52:20 crc 
kubenswrapper[4793]: I0130 13:52:20.662641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") pod \"840c8b00-73a4-4378-b5a8-83f2595916a4\" (UID: \"840c8b00-73a4-4378-b5a8-83f2595916a4\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.662668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") pod \"b34660b0-a161-4587-96a6-1a86a2e3f632\" (UID: \"b34660b0-a161-4587-96a6-1a86a2e3f632\") " Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663520 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwrln\" (UniqueName: \"kubernetes.io/projected/89a43c58-d327-429a-96cd-9f9f5393368a-kube-api-access-pwrln\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663547 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663558 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663612 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhvt4\" (UniqueName: \"kubernetes.io/projected/08b55ba0-087d-42ec-a0c5-538f0a3c0987-kube-api-access-nhvt4\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.663624 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b55ba0-087d-42ec-a0c5-538f0a3c0987-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.666438 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities" (OuterVolumeSpecName: "utilities") pod "b34660b0-a161-4587-96a6-1a86a2e3f632" (UID: "b34660b0-a161-4587-96a6-1a86a2e3f632"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.671866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities" (OuterVolumeSpecName: "utilities") pod "840c8b00-73a4-4378-b5a8-83f2595916a4" (UID: "840c8b00-73a4-4378-b5a8-83f2595916a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.672459 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ee8452f4-fe2b-44d0-a26a-f7171e108fc9" (UID: "ee8452f4-fe2b-44d0-a26a-f7171e108fc9"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.680259 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv" (OuterVolumeSpecName: "kube-api-access-zg5zv") pod "b34660b0-a161-4587-96a6-1a86a2e3f632" (UID: "b34660b0-a161-4587-96a6-1a86a2e3f632"). InnerVolumeSpecName "kube-api-access-zg5zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.686186 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ee8452f4-fe2b-44d0-a26a-f7171e108fc9" (UID: "ee8452f4-fe2b-44d0-a26a-f7171e108fc9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.707594 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp" (OuterVolumeSpecName: "kube-api-access-p9nnp") pod "840c8b00-73a4-4378-b5a8-83f2595916a4" (UID: "840c8b00-73a4-4378-b5a8-83f2595916a4"). InnerVolumeSpecName "kube-api-access-p9nnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.715178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft" (OuterVolumeSpecName: "kube-api-access-sh6ft") pod "ee8452f4-fe2b-44d0-a26a-f7171e108fc9" (UID: "ee8452f4-fe2b-44d0-a26a-f7171e108fc9"). InnerVolumeSpecName "kube-api-access-sh6ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.728531 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b34660b0-a161-4587-96a6-1a86a2e3f632" (UID: "b34660b0-a161-4587-96a6-1a86a2e3f632"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.735629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89a43c58-d327-429a-96cd-9f9f5393368a" (UID: "89a43c58-d327-429a-96cd-9f9f5393368a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.751756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "840c8b00-73a4-4378-b5a8-83f2595916a4" (UID: "840c8b00-73a4-4378-b5a8-83f2595916a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765195 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765228 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765240 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh6ft\" (UniqueName: \"kubernetes.io/projected/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-kube-api-access-sh6ft\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765254 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765263 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9nnp\" (UniqueName: \"kubernetes.io/projected/840c8b00-73a4-4378-b5a8-83f2595916a4-kube-api-access-p9nnp\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765271 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/840c8b00-73a4-4378-b5a8-83f2595916a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765279 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a43c58-d327-429a-96cd-9f9f5393368a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765287 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34660b0-a161-4587-96a6-1a86a2e3f632-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765295 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee8452f4-fe2b-44d0-a26a-f7171e108fc9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.765303 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg5zv\" (UniqueName: \"kubernetes.io/projected/b34660b0-a161-4587-96a6-1a86a2e3f632-kube-api-access-zg5zv\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.985348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvlgd" event={"ID":"08b55ba0-087d-42ec-a0c5-538f0a3c0987","Type":"ContainerDied","Data":"e438cc892f7ad0406801bd88b27ea7d9474a125c514f11d8ac2ab76f42215f27"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.985400 4793 scope.go:117] "RemoveContainer" containerID="539c3853e42d9d22bfa167a67e472131adad4bd97a97c725d04b9f2fb5b89b55" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.985884 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvlgd" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.986639 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" event={"ID":"ee8452f4-fe2b-44d0-a26a-f7171e108fc9","Type":"ContainerDied","Data":"97c187117ac894b4f40744eaace0837c1dade5f185e1a06955e03936c650d6b8"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.986725 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zd5lq" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.988971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qnl2" event={"ID":"840c8b00-73a4-4378-b5a8-83f2595916a4","Type":"ContainerDied","Data":"c106e074002678528ae31ccdf1bb58932690b2a742055da2e9f297d7f5cc6c7c"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.989094 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qnl2" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.995232 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9t8x" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.995263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9t8x" event={"ID":"b34660b0-a161-4587-96a6-1a86a2e3f632","Type":"ContainerDied","Data":"0e22ed488b0d95eaf0cf80ba9106bf9da157b5ab0630c5fce06e88b1a1a7e207"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.996393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" event={"ID":"5834bf4b-676f-4ece-bcee-28949a7109ca","Type":"ContainerStarted","Data":"b8fcf2220c6b92f86f590aee94530fd0f54a302ad02a5fa5cce8ea811b739ea5"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.996434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" event={"ID":"5834bf4b-676f-4ece-bcee-28949a7109ca","Type":"ContainerStarted","Data":"5da7a1e8e45df963d762476724080c5153a328af9a9a0e9890defec8c6bf8ae5"} Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.996727 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:20 crc kubenswrapper[4793]: I0130 13:52:20.998829 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.001388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vn6kf" event={"ID":"89a43c58-d327-429a-96cd-9f9f5393368a","Type":"ContainerDied","Data":"1f4643d93c77f9c1fa9d15f80b1a4b9e9c2ad2fc279deeae64b1715da148c011"} Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.001456 4793 scope.go:117] "RemoveContainer" containerID="a39b5636265cc040beb743a7d92b7de07f6a61cbb255d62d9adbf1ef86fd75b0" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.001457 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vn6kf" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.023819 4793 scope.go:117] "RemoveContainer" containerID="bf4b42ce53f022eba5077f61f642433a8e1373279291fcdbe9bff308d17c0e0d" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.065981 4793 scope.go:117] "RemoveContainer" containerID="12a6dc8d1fe12e66c88c1e9af34c91aecbf032c69850554757bd6c716f87e793" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.070295 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zkjbp" podStartSLOduration=2.070263998 podStartE2EDuration="2.070263998s" podCreationTimestamp="2026-01-30 13:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:52:21.02946036 +0000 UTC m=+551.730808851" watchObservedRunningTime="2026-01-30 13:52:21.070263998 +0000 UTC m=+551.771612489" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.099912 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.109315 4793 scope.go:117] "RemoveContainer" containerID="84cd655416136fa3e73cac54a43941e805b3e648275563df361a78561fee0a01" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.116092 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvlgd"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.130644 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.148007 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g9t8x"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.154399 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.169576 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zd5lq"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.175701 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.180098 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6qnl2"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.180465 4793 scope.go:117] "RemoveContainer" containerID="3991b8c8da8221b7422f215779cd2c7fe6fecd1213e2421f8f1c4e3c851baccd" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.184362 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.188404 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vn6kf"] Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.212730 4793 scope.go:117] "RemoveContainer" containerID="f652789a637248503c2fc91700a36ad3f9de2a0dc0aa687e53dccfa3f8c0a8b5" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.234220 4793 scope.go:117] "RemoveContainer" containerID="393188ba22f128de9c0a011df4faebd2b1d1eb0a5b1ea461fc46bcc26c5a26e1" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.255547 4793 
scope.go:117] "RemoveContainer" containerID="0a9be6fb1fc0d8a14f1edca7b047f49698da2a9d4b0fc318118d31f74ad0506a" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.312721 4793 scope.go:117] "RemoveContainer" containerID="3b482005c537462a0ede36ab68d9d608d2121842b0870338080990e3d66e4059" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.336757 4793 scope.go:117] "RemoveContainer" containerID="04cab8777968c78ddbe77df944f0557b099be348daaec3a0b9ff7c7f4c0c511b" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.359279 4793 scope.go:117] "RemoveContainer" containerID="17de5c4fa1f8a1615ce34e313bf58b61c0d69abdba7886409d1567e3fa60d503" Jan 30 13:52:21 crc kubenswrapper[4793]: I0130 13:52:21.379141 4793 scope.go:117] "RemoveContainer" containerID="1292ed33cb4910e7379d650e9bdaa57110f788906801a44590e292cca7705790" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131287 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgznc"] Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131543 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131560 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131570 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131578 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131593 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131600 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131609 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131616 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131628 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131635 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131659 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131670 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131678 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131685 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131695 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131702 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131712 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131720 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131729 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131736 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131746 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131752 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131760 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131767 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131777 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131795 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131801 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131813 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131820 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="extract-content" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.131831 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131840 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="extract-utilities" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131945 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131959 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131980 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131988 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.131999 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132010 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" containerName="registry-server" Jan 30 13:52:22 crc kubenswrapper[4793]: E0130 13:52:22.132132 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132239 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.132250 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" containerName="marketplace-operator" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.136022 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.138107 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.142713 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgznc"] Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.281345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rmm\" (UniqueName: \"kubernetes.io/projected/79353c7a-f5cf-43e5-9c5a-443565d0cca7-kube-api-access-b8rmm\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.281932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-utilities\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.282114 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-catalog-content\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.383392 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-utilities\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.383908 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-utilities\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.383918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-catalog-content\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.384010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8rmm\" (UniqueName: \"kubernetes.io/projected/79353c7a-f5cf-43e5-9c5a-443565d0cca7-kube-api-access-b8rmm\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.384738 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79353c7a-f5cf-43e5-9c5a-443565d0cca7-catalog-content\") pod \"redhat-marketplace-rgznc\" (UID: 
\"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.407360 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8rmm\" (UniqueName: \"kubernetes.io/projected/79353c7a-f5cf-43e5-9c5a-443565d0cca7-kube-api-access-b8rmm\") pod \"redhat-marketplace-rgznc\" (UID: \"79353c7a-f5cf-43e5-9c5a-443565d0cca7\") " pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.409405 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b55ba0-087d-42ec-a0c5-538f0a3c0987" path="/var/lib/kubelet/pods/08b55ba0-087d-42ec-a0c5-538f0a3c0987/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.410202 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840c8b00-73a4-4378-b5a8-83f2595916a4" path="/var/lib/kubelet/pods/840c8b00-73a4-4378-b5a8-83f2595916a4/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.410911 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a43c58-d327-429a-96cd-9f9f5393368a" path="/var/lib/kubelet/pods/89a43c58-d327-429a-96cd-9f9f5393368a/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.412123 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34660b0-a161-4587-96a6-1a86a2e3f632" path="/var/lib/kubelet/pods/b34660b0-a161-4587-96a6-1a86a2e3f632/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.412843 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8452f4-fe2b-44d0-a26a-f7171e108fc9" path="/var/lib/kubelet/pods/ee8452f4-fe2b-44d0-a26a-f7171e108fc9/volumes" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.458259 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:22 crc kubenswrapper[4793]: I0130 13:52:22.629681 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgznc"] Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.021255 4793 generic.go:334] "Generic (PLEG): container finished" podID="79353c7a-f5cf-43e5-9c5a-443565d0cca7" containerID="930e82898eecd32747e439313325fb5db69a9f46a5de40cf183e52e534aee9ca" exitCode=0 Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.021395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerDied","Data":"930e82898eecd32747e439313325fb5db69a9f46a5de40cf183e52e534aee9ca"} Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.021433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerStarted","Data":"316fa15aff1fca6d3d61f0e1f08e0c576e6fa49e0a4f5c9f26ce65b8a69939f8"} Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.023339 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.130441 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5rxw"] Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.131826 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.133907 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.174726 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5rxw"] Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.299323 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-catalog-content\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.299449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-utilities\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.299483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxbh\" (UniqueName: \"kubernetes.io/projected/6be7bc1b-60e4-429d-b706-90063b00442e-kube-api-access-nkxbh\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-catalog-content\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400347 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-utilities\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400366 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkxbh\" (UniqueName: \"kubernetes.io/projected/6be7bc1b-60e4-429d-b706-90063b00442e-kube-api-access-nkxbh\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.400966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-utilities\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.402216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be7bc1b-60e4-429d-b706-90063b00442e-catalog-content\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " 
pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.426453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkxbh\" (UniqueName: \"kubernetes.io/projected/6be7bc1b-60e4-429d-b706-90063b00442e-kube-api-access-nkxbh\") pod \"redhat-operators-t5rxw\" (UID: \"6be7bc1b-60e4-429d-b706-90063b00442e\") " pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.462963 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:23 crc kubenswrapper[4793]: I0130 13:52:23.641629 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5rxw"] Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.028731 4793 generic.go:334] "Generic (PLEG): container finished" podID="6be7bc1b-60e4-429d-b706-90063b00442e" containerID="aca04a4f1f2617025f87dff79f4716691f846f7673daa7e5d04c273110c42170" exitCode=0 Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.028775 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerDied","Data":"aca04a4f1f2617025f87dff79f4716691f846f7673daa7e5d04c273110c42170"} Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.028799 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerStarted","Data":"cb97b0929a7fa2cd74a9d4cf8809ccbd3fb47f01a4dd388a5e6cb18f2c97e1f3"} Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.531426 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lcb4v"] Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.532811 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.537661 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.558614 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcb4v"] Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.715317 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvntf\" (UniqueName: \"kubernetes.io/projected/adcaff8e-ed88-4fa1-af55-aedc60d35481-kube-api-access-cvntf\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.715379 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-catalog-content\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.715499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-utilities\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.816787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvntf\" (UniqueName: \"kubernetes.io/projected/adcaff8e-ed88-4fa1-af55-aedc60d35481-kube-api-access-cvntf\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.817341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-catalog-content\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.817512 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-utilities\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.817894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-catalog-content\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.818139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adcaff8e-ed88-4fa1-af55-aedc60d35481-utilities\") pod \"community-operators-lcb4v\" (UID: 
\"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.841896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvntf\" (UniqueName: \"kubernetes.io/projected/adcaff8e-ed88-4fa1-af55-aedc60d35481-kube-api-access-cvntf\") pod \"community-operators-lcb4v\" (UID: \"adcaff8e-ed88-4fa1-af55-aedc60d35481\") " pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:24 crc kubenswrapper[4793]: I0130 13:52:24.871948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.038712 4793 generic.go:334] "Generic (PLEG): container finished" podID="79353c7a-f5cf-43e5-9c5a-443565d0cca7" containerID="9b700715fdd4398f415461325325bd61f69b964ffd1362b02505fc5cc9b8afe1" exitCode=0 Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.039147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerDied","Data":"9b700715fdd4398f415461325325bd61f69b964ffd1362b02505fc5cc9b8afe1"} Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.177910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcb4v"] Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.535817 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-67xsr"] Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.537703 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.539770 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.546168 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67xsr"] Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.628562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkvd\" (UniqueName: \"kubernetes.io/projected/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-kube-api-access-rdkvd\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.628629 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-utilities\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.628707 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-catalog-content\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.730612 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-catalog-content\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.730666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkvd\" (UniqueName: \"kubernetes.io/projected/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-kube-api-access-rdkvd\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.730685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-utilities\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.731200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-catalog-content\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.731247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-utilities\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.764243 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkvd\" (UniqueName: \"kubernetes.io/projected/4a0cd3b8-afdf-4eb1-b818-565ce4d0647d-kube-api-access-rdkvd\") pod \"certified-operators-67xsr\" (UID: \"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d\") " pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: I0130 13:52:25.875096 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:25 crc kubenswrapper[4793]: E0130 13:52:25.913255 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be7bc1b_60e4_429d_b706_90063b00442e.slice/crio-conmon-47e990fbb80040cf69648b7b7c078b3963a143cb2e576f475cf3b07883f90d34.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.047696 4793 generic.go:334] "Generic (PLEG): container finished" podID="6be7bc1b-60e4-429d-b706-90063b00442e" containerID="47e990fbb80040cf69648b7b7c078b3963a143cb2e576f475cf3b07883f90d34" exitCode=0 Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.047772 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerDied","Data":"47e990fbb80040cf69648b7b7c078b3963a143cb2e576f475cf3b07883f90d34"} Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.055061 4793 generic.go:334] "Generic (PLEG): container finished" podID="adcaff8e-ed88-4fa1-af55-aedc60d35481" containerID="a69823748d7cafe556ac4bb75e41342c6daf8cb5c0d166ea11440a37e56fac38" exitCode=0 Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.055098 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerDied","Data":"a69823748d7cafe556ac4bb75e41342c6daf8cb5c0d166ea11440a37e56fac38"} Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.055120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerStarted","Data":"bf56ebe8af3ddb557a4352e48c282d6e46aeb85d3b9b270adfeaa714aef5b418"} Jan 30 13:52:26 crc kubenswrapper[4793]: W0130 13:52:26.245765 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a0cd3b8_afdf_4eb1_b818_565ce4d0647d.slice/crio-a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38 WatchSource:0}: Error finding container a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38: Status 404 returned error can't find the container with id a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38 Jan 30 13:52:26 crc kubenswrapper[4793]: I0130 13:52:26.246015 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67xsr"] Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.061802 4793 generic.go:334] "Generic (PLEG): container finished" podID="4a0cd3b8-afdf-4eb1-b818-565ce4d0647d" containerID="0cd9a1b7c5c52728ff5e012bf166e9b2ed9f732690a3ba82987c58f8a440a01b" exitCode=0 Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.061984 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerDied","Data":"0cd9a1b7c5c52728ff5e012bf166e9b2ed9f732690a3ba82987c58f8a440a01b"} Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.062199 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" 
event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerStarted","Data":"a366037dfbabfaf472a62671180ed50cb056d4acc52d227c689f195003e16b38"} Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.064948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgznc" event={"ID":"79353c7a-f5cf-43e5-9c5a-443565d0cca7","Type":"ContainerStarted","Data":"6cc3f4a77ecb1125601f957830603c5160f420d3df61316dbe693a785008f6f6"} Jan 30 13:52:27 crc kubenswrapper[4793]: I0130 13:52:27.101006 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgznc" podStartSLOduration=2.104613174 podStartE2EDuration="5.100989404s" podCreationTimestamp="2026-01-30 13:52:22 +0000 UTC" firstStartedPulling="2026-01-30 13:52:23.022995484 +0000 UTC m=+553.724343975" lastFinishedPulling="2026-01-30 13:52:26.019371714 +0000 UTC m=+556.720720205" observedRunningTime="2026-01-30 13:52:27.098561092 +0000 UTC m=+557.799909593" watchObservedRunningTime="2026-01-30 13:52:27.100989404 +0000 UTC m=+557.802337895" Jan 30 13:52:29 crc kubenswrapper[4793]: I0130 13:52:29.076606 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5rxw" event={"ID":"6be7bc1b-60e4-429d-b706-90063b00442e","Type":"ContainerStarted","Data":"c0284e5136e09cf729226e342eaaf5612bc1f32f83f8b477abd5086512267844"} Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.458853 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.459542 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.508628 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:32 crc kubenswrapper[4793]: I0130 13:52:32.541958 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5rxw" podStartSLOduration=5.366903919 podStartE2EDuration="9.541937715s" podCreationTimestamp="2026-01-30 13:52:23 +0000 UTC" firstStartedPulling="2026-01-30 13:52:24.030170711 +0000 UTC m=+554.731519212" lastFinishedPulling="2026-01-30 13:52:28.205204517 +0000 UTC m=+558.906553008" observedRunningTime="2026-01-30 13:52:30.103584191 +0000 UTC m=+560.804932682" watchObservedRunningTime="2026-01-30 13:52:32.541937715 +0000 UTC m=+563.243286206" Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.099169 4793 generic.go:334] "Generic (PLEG): container finished" podID="adcaff8e-ed88-4fa1-af55-aedc60d35481" containerID="42b2099c6c78fdddab0dab33f7a437e712ef0090700cd534c972f42d6ab5e5e7" exitCode=0 Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.099371 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerDied","Data":"42b2099c6c78fdddab0dab33f7a437e712ef0090700cd534c972f42d6ab5e5e7"} Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.140145 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgznc" Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.463486 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:33 crc kubenswrapper[4793]: I0130 13:52:33.463549 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.107741 4793 generic.go:334] "Generic (PLEG): container finished" podID="4a0cd3b8-afdf-4eb1-b818-565ce4d0647d" containerID="90129e008a4dc89b51e60eb13c1d26e28f5c7cdce257c5589da14191ad251cb2" exitCode=0 Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.108290 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerDied","Data":"90129e008a4dc89b51e60eb13c1d26e28f5c7cdce257c5589da14191ad251cb2"} Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.113995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcb4v" event={"ID":"adcaff8e-ed88-4fa1-af55-aedc60d35481","Type":"ContainerStarted","Data":"09eaeff79843cbfc2f9ffb76f9a605c453689a058df45abd066d2424f46b5c4d"} Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.153788 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lcb4v" podStartSLOduration=2.291167096 podStartE2EDuration="10.153771924s" podCreationTimestamp="2026-01-30 13:52:24 +0000 UTC" firstStartedPulling="2026-01-30 13:52:26.057586657 +0000 UTC m=+556.758935148" lastFinishedPulling="2026-01-30 13:52:33.920191485 +0000 UTC m=+564.621539976" observedRunningTime="2026-01-30 13:52:34.148191562 +0000 UTC m=+564.849540063" watchObservedRunningTime="2026-01-30 13:52:34.153771924 +0000 UTC m=+564.855120435" Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.500641 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5rxw" podUID="6be7bc1b-60e4-429d-b706-90063b00442e" containerName="registry-server" probeResult="failure" output=< Jan 30 13:52:34 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:52:34 crc kubenswrapper[4793]: > Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.872972 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:34 crc kubenswrapper[4793]: I0130 13:52:34.873022 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.126183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67xsr" event={"ID":"4a0cd3b8-afdf-4eb1-b818-565ce4d0647d","Type":"ContainerStarted","Data":"4eb4333cd4336b298ad678a984117026d91a1b15197428779efc1835b346a1ef"} Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.148857 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-67xsr" podStartSLOduration=2.467095852 podStartE2EDuration="10.148838262s" podCreationTimestamp="2026-01-30 13:52:25 +0000 UTC" firstStartedPulling="2026-01-30 13:52:27.064969818 +0000 UTC m=+557.766318309" lastFinishedPulling="2026-01-30 13:52:34.746712228 +0000 UTC m=+565.448060719" observedRunningTime="2026-01-30 13:52:35.143793344 +0000 UTC m=+565.845141855" watchObservedRunningTime="2026-01-30 13:52:35.148838262 +0000 UTC m=+565.850186753" Jan 30 13:52:35 crc 
kubenswrapper[4793]: I0130 13:52:35.875872 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.875983 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:35 crc kubenswrapper[4793]: I0130 13:52:35.917593 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lcb4v" podUID="adcaff8e-ed88-4fa1-af55-aedc60d35481" containerName="registry-server" probeResult="failure" output=< Jan 30 13:52:35 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:52:35 crc kubenswrapper[4793]: > Jan 30 13:52:36 crc kubenswrapper[4793]: I0130 13:52:36.927910 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-67xsr" podUID="4a0cd3b8-afdf-4eb1-b818-565ce4d0647d" containerName="registry-server" probeResult="failure" output=< Jan 30 13:52:36 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:52:36 crc kubenswrapper[4793]: > Jan 30 13:52:42 crc kubenswrapper[4793]: I0130 13:52:42.413496 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:52:42 crc kubenswrapper[4793]: I0130 13:52:42.414075 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:52:43 crc kubenswrapper[4793]: I0130 13:52:43.503561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:43 crc kubenswrapper[4793]: I0130 13:52:43.555415 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5rxw" Jan 30 13:52:44 crc kubenswrapper[4793]: I0130 13:52:44.915992 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:44 crc kubenswrapper[4793]: I0130 13:52:44.960367 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lcb4v" Jan 30 13:52:45 crc kubenswrapper[4793]: I0130 13:52:45.913906 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:52:45 crc kubenswrapper[4793]: I0130 13:52:45.950773 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-67xsr" Jan 30 13:53:12 crc kubenswrapper[4793]: I0130 13:53:12.414037 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:53:12 crc kubenswrapper[4793]: I0130 13:53:12.414612 4793 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.429633 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.430434 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.445240 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.446183 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:53:42 crc kubenswrapper[4793]: I0130 13:53:42.446387 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a" gracePeriod=600 Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490167 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a" exitCode=0 Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"} Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa"} Jan 30 13:53:43 crc kubenswrapper[4793]: I0130 13:53:43.490597 4793 scope.go:117] "RemoveContainer" containerID="eb80942b6e6f56f06d5a97a5c92cee45946524669b2d3f8777363114c1c78ea4" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.437727 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jbshc"] Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.439266 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.463351 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jbshc"] Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.609836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-registry-certificates\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610079 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-registry-tls\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-bound-sa-token\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5rt\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-kube-api-access-vp5rt\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610327 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a004a105-a29f-46a5-958e-6cf954856c97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610395 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-trusted-ca\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.610596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/a004a105-a29f-46a5-958e-6cf954856c97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.632143 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a004a105-a29f-46a5-958e-6cf954856c97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711641 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-trusted-ca\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a004a105-a29f-46a5-958e-6cf954856c97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711713 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-registry-certificates\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711748 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-bound-sa-token\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-registry-tls\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.711826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5rt\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-kube-api-access-vp5rt\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.712107 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a004a105-a29f-46a5-958e-6cf954856c97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.712951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-trusted-ca\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.713368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a004a105-a29f-46a5-958e-6cf954856c97-registry-certificates\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.717580 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-registry-tls\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.719247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a004a105-a29f-46a5-958e-6cf954856c97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.733728 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5rt\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-kube-api-access-vp5rt\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.735709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a004a105-a29f-46a5-958e-6cf954856c97-bound-sa-token\") pod \"image-registry-66df7c8f76-jbshc\" (UID: \"a004a105-a29f-46a5-958e-6cf954856c97\") " pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.810158 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.973539 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jbshc"] Jan 30 13:55:15 crc kubenswrapper[4793]: I0130 13:55:15.990553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" event={"ID":"a004a105-a29f-46a5-958e-6cf954856c97","Type":"ContainerStarted","Data":"7d3b158012fa8515ed07746109da8437d41fd316e57ace5b89c602b689f31ffa"} Jan 30 13:55:16 crc kubenswrapper[4793]: I0130 13:55:16.996308 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" event={"ID":"a004a105-a29f-46a5-958e-6cf954856c97","Type":"ContainerStarted","Data":"0ec34bbd1fa059ca7e9d8a36a858bc7600a2a06d56ef4741c5ab335490255299"} Jan 30 13:55:16 crc kubenswrapper[4793]: I0130 13:55:16.996601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:17 crc kubenswrapper[4793]: I0130 13:55:17.013445 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" podStartSLOduration=2.01341846 podStartE2EDuration="2.01341846s" podCreationTimestamp="2026-01-30 13:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:55:17.012164589 +0000 UTC m=+727.713513120" watchObservedRunningTime="2026-01-30 13:55:17.01341846 +0000 UTC m=+727.714766981" Jan 30 13:55:35 crc kubenswrapper[4793]: I0130 13:55:35.819250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jbshc" Jan 30 13:55:35 crc kubenswrapper[4793]: I0130 13:55:35.913323 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:56:00 crc kubenswrapper[4793]: I0130 13:56:00.987129 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" containerID="cri-o://2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" gracePeriod=30 Jan 30 13:56:01 crc kubenswrapper[4793]: I0130 13:56:01.980904 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107577 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107635 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107659 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107815 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.107943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") pod \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\" (UID: \"d6e18cea-cac6-4eb8-b8de-2885fcf57497\") " Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.108950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.113439 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.113524 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.114630 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.114857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.118635 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.120559 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5" (OuterVolumeSpecName: "kube-api-access-xg2l5") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "kube-api-access-xg2l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.124734 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d6e18cea-cac6-4eb8-b8de-2885fcf57497" (UID: "d6e18cea-cac6-4eb8-b8de-2885fcf57497"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209746 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209800 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209813 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6e18cea-cac6-4eb8-b8de-2885fcf57497-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209827 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209842 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg2l5\" (UniqueName: \"kubernetes.io/projected/d6e18cea-cac6-4eb8-b8de-2885fcf57497-kube-api-access-xg2l5\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209856 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6e18cea-cac6-4eb8-b8de-2885fcf57497-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.209867 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6e18cea-cac6-4eb8-b8de-2885fcf57497-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerDied","Data":"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"} Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282103 4793 scope.go:117] "RemoveContainer" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282041 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.281977 4793 generic.go:334] "Generic (PLEG): container finished" podID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" exitCode=0 Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.282297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pfnjs" event={"ID":"d6e18cea-cac6-4eb8-b8de-2885fcf57497","Type":"ContainerDied","Data":"a08f554d2033f377796937c2541b63cf2f56fd0fbab97d4b3c4a88316aa86471"} Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.303038 4793 scope.go:117] "RemoveContainer" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" Jan 30 13:56:02 crc kubenswrapper[4793]: E0130 13:56:02.303604 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3\": container with ID starting with 2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3 not found: ID does not exist" containerID="2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.303640 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3"} err="failed to get container status \"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3\": rpc error: code = NotFound desc = could not find container \"2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3\": container with ID starting with 2000288c6511a60cff9a2a9e6da07c95093e36c32626965fd00a98f50542fbe3 not found: ID does not exist" Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.315503 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.323830 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pfnjs"] Jan 30 13:56:02 crc kubenswrapper[4793]: I0130 13:56:02.415380 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" path="/var/lib/kubelet/pods/d6e18cea-cac6-4eb8-b8de-2885fcf57497/volumes" Jan 30 13:56:10 crc kubenswrapper[4793]: I0130 13:56:10.909579 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 13:56:12 crc kubenswrapper[4793]: I0130 13:56:12.413469 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:56:12 crc kubenswrapper[4793]: I0130 13:56:12.413764 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:56:42 crc kubenswrapper[4793]: I0130 13:56:42.413975 4793 patch_prober.go:28] 
Jan 30 13:56:42 crc kubenswrapper[4793]: I0130 13:56:42.413975 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:56:42 crc kubenswrapper[4793]: I0130 13:56:42.414854 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.414214 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.414942 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.415020 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.416000 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.416127 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa" gracePeriod=600
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.663774 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa" exitCode=0
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.663814 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa"}
Jan 30 13:57:12 crc kubenswrapper[4793]: I0130 13:57:12.663864 4793 scope.go:117] "RemoveContainer" containerID="da1bd3d911e39105fb6fe0014eb41a36c6a445fb3c02ca872cc47e861a75515a"
Jan 30 13:57:13 crc kubenswrapper[4793]: I0130 13:57:13.671923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad"}
Jan 30
13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.664917 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq"] Jan 30 13:58:47 crc kubenswrapper[4793]: E0130 13:58:47.665709 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.665726 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.665855 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e18cea-cac6-4eb8-b8de-2885fcf57497" containerName="registry" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.666349 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.669767 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.674445 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fpdzl" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.674492 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.674956 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.682320 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-26t5l"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.683121 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.690541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zbvxs" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.705367 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-26t5l"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.713007 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lm7l8"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.713802 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.718578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gjfks" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.735955 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lm7l8"] Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.858813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w9r\" (UniqueName: \"kubernetes.io/projected/1b680507-f432-4019-b372-d9452d89aa97-kube-api-access-n7w9r\") pod \"cert-manager-858654f9db-26t5l\" (UID: \"1b680507-f432-4019-b372-d9452d89aa97\") " pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.858874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5z7\" (UniqueName: \"kubernetes.io/projected/8fd78cec-1c0f-427e-8224-4021da0ede3c-kube-api-access-td5z7\") pod \"cert-manager-cainjector-cf98fcc89-tzjhq\" (UID: \"8fd78cec-1c0f-427e-8224-4021da0ede3c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.858987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkk56\" (UniqueName: \"kubernetes.io/projected/e88efb4a-1489-4847-adb4-230a8b5db6ef-kube-api-access-mkk56\") pod \"cert-manager-webhook-687f57d79b-lm7l8\" (UID: \"e88efb4a-1489-4847-adb4-230a8b5db6ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.960212 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w9r\" (UniqueName: \"kubernetes.io/projected/1b680507-f432-4019-b372-d9452d89aa97-kube-api-access-n7w9r\") pod \"cert-manager-858654f9db-26t5l\" (UID: \"1b680507-f432-4019-b372-d9452d89aa97\") " pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.960286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td5z7\" (UniqueName: \"kubernetes.io/projected/8fd78cec-1c0f-427e-8224-4021da0ede3c-kube-api-access-td5z7\") pod \"cert-manager-cainjector-cf98fcc89-tzjhq\" (UID: \"8fd78cec-1c0f-427e-8224-4021da0ede3c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.960341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkk56\" (UniqueName: \"kubernetes.io/projected/e88efb4a-1489-4847-adb4-230a8b5db6ef-kube-api-access-mkk56\") pod \"cert-manager-webhook-687f57d79b-lm7l8\" (UID: \"e88efb4a-1489-4847-adb4-230a8b5db6ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.987998 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w9r\" (UniqueName: \"kubernetes.io/projected/1b680507-f432-4019-b372-d9452d89aa97-kube-api-access-n7w9r\") pod \"cert-manager-858654f9db-26t5l\" (UID: \"1b680507-f432-4019-b372-d9452d89aa97\") " pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.988473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-td5z7\" (UniqueName: \"kubernetes.io/projected/8fd78cec-1c0f-427e-8224-4021da0ede3c-kube-api-access-td5z7\") pod \"cert-manager-cainjector-cf98fcc89-tzjhq\" (UID: \"8fd78cec-1c0f-427e-8224-4021da0ede3c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.995498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-26t5l" Jan 30 13:58:47 crc kubenswrapper[4793]: I0130 13:58:47.995510 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkk56\" (UniqueName: \"kubernetes.io/projected/e88efb4a-1489-4847-adb4-230a8b5db6ef-kube-api-access-mkk56\") pod \"cert-manager-webhook-687f57d79b-lm7l8\" (UID: \"e88efb4a-1489-4847-adb4-230a8b5db6ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.027221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.261063 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-26t5l"] Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.272418 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.284963 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.328078 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lm7l8"] Jan 30 13:58:48 crc kubenswrapper[4793]: W0130 13:58:48.332776 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode88efb4a_1489_4847_adb4_230a8b5db6ef.slice/crio-84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce WatchSource:0}: Error finding container 84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce: Status 404 returned error can't find the container with id 84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce Jan 30 13:58:48 crc kubenswrapper[4793]: I0130 13:58:48.500722 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq"] Jan 30 13:58:48 crc kubenswrapper[4793]: W0130 13:58:48.506770 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd78cec_1c0f_427e_8224_4021da0ede3c.slice/crio-6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073 WatchSource:0}: Error finding container 6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073: Status 404 returned error can't find the container with id 6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073 Jan 30 13:58:49 crc kubenswrapper[4793]: I0130 13:58:49.195266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" event={"ID":"e88efb4a-1489-4847-adb4-230a8b5db6ef","Type":"ContainerStarted","Data":"84f73cb258d3a393a10be90b1c927c58afe345979336c8a7a3b8934bc7a2d7ce"} Jan 30 13:58:49 crc kubenswrapper[4793]: I0130 13:58:49.196297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-26t5l" 
event={"ID":"1b680507-f432-4019-b372-d9452d89aa97","Type":"ContainerStarted","Data":"aaa9e1d83c48611449eb72b512d4f2064d9ba3b681f58004fac199eadcf79da5"} Jan 30 13:58:49 crc kubenswrapper[4793]: I0130 13:58:49.197172 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" event={"ID":"8fd78cec-1c0f-427e-8224-4021da0ede3c","Type":"ContainerStarted","Data":"6f38a24bf997beecffec529a8546352b6443ec28ab341e7a7f061b606f098073"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.238277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-26t5l" event={"ID":"1b680507-f432-4019-b372-d9452d89aa97","Type":"ContainerStarted","Data":"511706c2bbf825dd020c10e34d24be89772a8fc4cfdd2fe7554e1064cb56e985"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.240574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" event={"ID":"8fd78cec-1c0f-427e-8224-4021da0ede3c","Type":"ContainerStarted","Data":"51acd0d3e2d331a29cc7f93cde35c33ee2f096c038936babc4e402b2afe7ac70"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.244406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" event={"ID":"e88efb4a-1489-4847-adb4-230a8b5db6ef","Type":"ContainerStarted","Data":"397ae737299c48e7407c819cec40d16557ad4ced52e09be6fb4b85c45b12a802"} Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.245082 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.251667 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-26t5l" podStartSLOduration=1.895034172 podStartE2EDuration="7.251647163s" podCreationTimestamp="2026-01-30 13:58:47 +0000 UTC" firstStartedPulling="2026-01-30 13:58:48.272175302 +0000 UTC m=+938.973523793" lastFinishedPulling="2026-01-30 13:58:53.628788293 +0000 UTC m=+944.330136784" observedRunningTime="2026-01-30 13:58:54.24990043 +0000 UTC m=+944.951248931" watchObservedRunningTime="2026-01-30 13:58:54.251647163 +0000 UTC m=+944.952995654" Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.268711 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" podStartSLOduration=2.005806527 podStartE2EDuration="7.26868555s" podCreationTimestamp="2026-01-30 13:58:47 +0000 UTC" firstStartedPulling="2026-01-30 13:58:48.336431027 +0000 UTC m=+939.037779518" lastFinishedPulling="2026-01-30 13:58:53.59931005 +0000 UTC m=+944.300658541" observedRunningTime="2026-01-30 13:58:54.267508291 +0000 UTC m=+944.968856782" watchObservedRunningTime="2026-01-30 13:58:54.26868555 +0000 UTC m=+944.970034041" Jan 30 13:58:54 crc kubenswrapper[4793]: I0130 13:58:54.298627 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tzjhq" podStartSLOduration=2.193016778 podStartE2EDuration="7.298607714s" podCreationTimestamp="2026-01-30 13:58:47 +0000 UTC" firstStartedPulling="2026-01-30 13:58:48.509031949 +0000 UTC m=+939.210380430" lastFinishedPulling="2026-01-30 13:58:53.614622875 +0000 UTC m=+944.315971366" observedRunningTime="2026-01-30 13:58:54.297530068 +0000 UTC m=+944.998878569" watchObservedRunningTime="2026-01-30 13:58:54.298607714 +0000 UTC m=+944.999956215" Jan 30 13:58:57 
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.253106 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g62p5"]
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.254421 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" containerID="cri-o://cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255102 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" containerID="cri-o://1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255206 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" containerID="cri-o://34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255276 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" containerID="cri-o://7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255309 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" containerID="cri-o://3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255427 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" containerID="cri-o://8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.255441 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.331571 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" containerID="cri-o://970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" gracePeriod=30
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.608352 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log"
Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.610856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-acl-logging/0.log" Jan
30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.611469 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-controller/0.log" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.612015 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673318 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2kfl2"] Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673579 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kubecfg-setup" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673601 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kubecfg-setup" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673615 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673630 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673638 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673650 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673657 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673666 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673673 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673683 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673691 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673699 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673708 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673721 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" Jan 30 13:58:57 crc 
kubenswrapper[4793]: I0130 13:58:57.673728 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673740 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673747 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673759 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673766 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673778 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673785 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.673799 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673806 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673919 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-acl-logging" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673929 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="nbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673941 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673958 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673969 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673980 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="sbdb" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.673993 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovn-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674001 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="northd" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674014 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="kube-rbac-proxy-node" Jan 30 13:58:57 crc kubenswrapper[4793]: E0130 13:58:57.674297 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674308 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674395 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.674589 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerName="ovnkube-controller" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.675868 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.703611 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.703857 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.703978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704540 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704648 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.704876 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705057 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705354 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705586 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash" (OuterVolumeSpecName: "host-slash") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705749 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.705884 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706000 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706126 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706244 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\" (UID: \"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e\") " Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706909 4793 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707760 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706069 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706091 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706513 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706969 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.706996 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707032 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707063 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707095 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log" (OuterVolumeSpecName: "node-log") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707263 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket" (OuterVolumeSpecName: "log-socket") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707282 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.707708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.711870 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.711970 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w" (OuterVolumeSpecName: "kube-api-access-8km7w") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "kube-api-access-8km7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.719240 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" (UID: "5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.808968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-netd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-node-log\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-etc-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-systemd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-script-lib\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809220 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-log-socket\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7796f\" (UniqueName: 
\"kubernetes.io/projected/342be2df-69a2-48ac-bad1-4445129ba471-kube-api-access-7796f\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-kubelet\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-bin\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809368 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-netns\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-config\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809409 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-var-lib-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809443 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-ovn\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809525 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-systemd-units\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809597 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-env-overrides\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809623 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-slash\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/342be2df-69a2-48ac-bad1-4445129ba471-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809671 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809740 4793 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809752 4793 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809762 4793 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809773 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809784 4793 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809794 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-ovn-kubernetes\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809803 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809813 4793 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809822 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809832 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809842 4793 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809853 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809863 4793 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809871 4793 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809881 4793 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809891 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809902 4793 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.809911 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8km7w\" (UniqueName: \"kubernetes.io/projected/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e-kube-api-access-8km7w\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-config\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911435 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-var-lib-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911454 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-ovn\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-systemd-units\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-env-overrides\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-slash\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/342be2df-69a2-48ac-bad1-4445129ba471-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911563 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-node-log\") pod 
\"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-netd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-etc-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911636 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-systemd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911654 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-script-lib\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-log-socket\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7796f\" (UniqueName: \"kubernetes.io/projected/342be2df-69a2-48ac-bad1-4445129ba471-kube-api-access-7796f\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911694 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-kubelet\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911728 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-bin\") pod \"ovnkube-node-2kfl2\" (UID: 
\"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911749 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-netns\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.911805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-netns\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-node-log\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-netd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-kubelet\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-log-socket\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-systemd\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-etc-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912656 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-config\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912701 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912733 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-cni-bin\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912771 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-run-ovn\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912776 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-env-overrides\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-systemd-units\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.912806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-var-lib-openvswitch\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.913062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/342be2df-69a2-48ac-bad1-4445129ba471-host-slash\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.913199 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/342be2df-69a2-48ac-bad1-4445129ba471-ovnkube-script-lib\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.916507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/342be2df-69a2-48ac-bad1-4445129ba471-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.927001 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7796f\" (UniqueName: \"kubernetes.io/projected/342be2df-69a2-48ac-bad1-4445129ba471-kube-api-access-7796f\") pod \"ovnkube-node-2kfl2\" (UID: \"342be2df-69a2-48ac-bad1-4445129ba471\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:57 crc kubenswrapper[4793]: I0130 13:58:57.988831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:58:58 crc kubenswrapper[4793]: W0130 13:58:58.004791 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod342be2df_69a2_48ac_bad1_4445129ba471.slice/crio-387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7 WatchSource:0}: Error finding container 387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7: Status 404 returned error can't find the container with id 387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.031990 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.280200 4793 generic.go:334] "Generic (PLEG): container finished" podID="342be2df-69a2-48ac-bad1-4445129ba471" containerID="88b8e73ada383f6ab1bbf6341550ed0c3856aadbb0adf3493033cfe1f554513d" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.280298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerDied","Data":"88b8e73ada383f6ab1bbf6341550ed0c3856aadbb0adf3493033cfe1f554513d"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.280334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"387b33408865d60d5a4774ab0317e125f6eb2a216ce7a7e37e120be573a1a3f7"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.283030 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovnkube-controller/3.log" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.285740 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-acl-logging/0.log" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g62p5_5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/ovn-controller/0.log" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 
13:58:58.286711 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286747 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286756 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286767 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286775 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286783 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" exitCode=0 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286791 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" exitCode=143 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286800 4793 generic.go:334] "Generic (PLEG): container finished" podID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" exitCode=143 Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286921 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" 
event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286947 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286960 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286967 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286974 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286980 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286986 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286992 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.286998 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287005 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287013 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287025 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287032 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:58:58 crc 
kubenswrapper[4793]: I0130 13:58:58.287039 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287076 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287083 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287091 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287097 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287105 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287112 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287119 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287142 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287150 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287157 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287164 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287171 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc 
kubenswrapper[4793]: I0130 13:58:58.287177 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287183 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287189 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287198 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287204 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287214 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5" event={"ID":"5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e","Type":"ContainerDied","Data":"483688d83c9fd52a9c7106da5a4bf9f5c29a0ecb4d0a52164165da4e2be17cc3"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287224 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287233 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287241 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287247 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287254 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287260 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287266 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287272 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} Jan 30 13:58:58 crc 
kubenswrapper[4793]: I0130 13:58:58.287278 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287285 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287164 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.287150 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g62p5"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.293976 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/2.log"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294452 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294504 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e8d16db-eb58-4895-8c24-47d6f12b1ea4" containerID="bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd" exitCode=2
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerDied","Data":"bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294577 4793 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d"}
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.294984 4793 scope.go:117] "RemoveContainer" containerID="bfdf4f4d87575310b5571ad8d96eada9a0f6637ad77b4d2c2367210b2d703abd"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.340763 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.372564 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g62p5"]
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.376238 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.396078 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g62p5"]
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.406583 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e" path="/var/lib/kubelet/pods/5312f1f3-1363-47d2-ac5d-1c66fe7f8f1e/volumes"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.407231 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.438032 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.455516 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.478979 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.504422 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.531633 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.549573 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.562323 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.562614 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.562649 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.562671 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563007 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563137 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563159 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563390 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563419 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563439 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563604 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563632 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563649 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.563882 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563925 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.563944 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.564302 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564324 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564361 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.564552 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564574 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564590 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.564769 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564790 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.564807 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.565016 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565055 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565073 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: E0130 13:58:58.565426 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565484 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565502 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565738 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.565762 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566093 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566119 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566459 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566500 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.566811 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567138 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567159 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567493 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist"
\"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567535 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567801 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.567825 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568143 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568166 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568493 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568517 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568761 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.568784 4793 scope.go:117] "RemoveContainer" 
containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569285 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569326 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569673 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.569731 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570072 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570094 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570659 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.570692 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571031 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find 
container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571081 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571451 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.571478 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572333 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572378 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572604 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572632 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.572995 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573021 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9" Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573332 4793 
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573364 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573635 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573681 4793 scope.go:117] "RemoveContainer" containerID="e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.573995 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a"} err="failed to get container status \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": rpc error: code = NotFound desc = could not find container \"e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a\": container with ID starting with e7d8ce9772ce69ba45d578dbab6d2d676b6b8e0f0c1dd8525865700c059a999a not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574018 4793 scope.go:117] "RemoveContainer" containerID="1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574300 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4"} err="failed to get container status \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": rpc error: code = NotFound desc = could not find container \"1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4\": container with ID starting with 1d09120085d5a3ac832136736363e10b6aa9cfc9899c42f4a33ff2707021e3b4 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574323 4793 scope.go:117] "RemoveContainer" containerID="34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574668 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0"} err="failed to get container status \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": rpc error: code = NotFound desc = could not find container \"34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0\": container with ID starting with 34ca9a702232931e115bbcd0e051827116ac43513126b182bc321af11fc47ca0 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574692 4793 scope.go:117] "RemoveContainer" containerID="7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574926 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320"} err="failed to get container status \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": rpc error: code = NotFound desc = could not find container \"7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320\": container with ID starting with 7f21667b772939f2b6b2bc2dacb5b7a15474b43f805612428e80f0db02064320 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.574946 4793 scope.go:117] "RemoveContainer" containerID="ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575193 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32"} err="failed to get container status \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": rpc error: code = NotFound desc = could not find container \"ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32\": container with ID starting with ccbeced1507ed93b88da972b409575395a7f9469133e5d3f9ac43e49f576cf32 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575222 4793 scope.go:117] "RemoveContainer" containerID="3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575512 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05"} err="failed to get container status \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": rpc error: code = NotFound desc = could not find container \"3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05\": container with ID starting with 3a22e03123f3f06e30b7b56bd5b1030300bc7e230046c2a50f058bdcb4ae0d05 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575537 4793 scope.go:117] "RemoveContainer" containerID="8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575830 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6"} err="failed to get container status \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": rpc error: code = NotFound desc = could not find container \"8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6\": container with ID starting with 8b931783d27e7b9194579b9592446d7716323b755562e0b33a95462fc7dcf7d6 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.575847 4793 scope.go:117] "RemoveContainer" containerID="cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576106 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071"} err="failed to get container status \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": rpc error: code = NotFound desc = could not find container \"cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071\": container with ID starting with cb35dbfba6360cbe63ef00260d9cbd2426cb0264ea37db815e42229b35068071 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576137 4793 scope.go:117] "RemoveContainer" containerID="1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576502 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9"} err="failed to get container status \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": rpc error: code = NotFound desc = could not find container \"1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9\": container with ID starting with 1c298e3646d026ea2c9b88f800ff062ea2805424cc7aea67d179449fd269b0b9 not found: ID does not exist"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576527 4793 scope.go:117] "RemoveContainer" containerID="970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"
Jan 30 13:58:58 crc kubenswrapper[4793]: I0130 13:58:58.576859 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26"} err="failed to get container status \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": rpc error: code = NotFound desc = could not find container \"970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26\": container with ID starting with 970d3d910386543cca07bfd1f7caebc5ff61cbe983146f82b2abc200a2aadd26 not found: ID does not exist"
Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"5d3ec634afc2a467df35090317e765b9461be46d730b25ac7a328d44f8900b8c"}
Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301869 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"3bf7a64a244e9e2cdf6016ed5599bb41e04a892904664e45a2d378e93dc7f6ff"}
Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301881 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"b6c86fa688a85fe9bd9556d6a64bc540b9e93f0598b1b67f8c975082772a5d3f"}
Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301889 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"8510d09077213d1f2a66660ce2daa05063f18c25e78618889de101f314313091"}
Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"1619c1af114c7b01dbbb7c9436c129d7eedfc249de446b630c05cd560373ae40"}
event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"1619c1af114c7b01dbbb7c9436c129d7eedfc249de446b630c05cd560373ae40"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.301907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"26718b1a742de84bb59e907064af9f6254b9b92d21fc639e1b0a80d157b3edfe"} Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.305224 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/2.log" Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.305700 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/1.log" Jan 30 13:58:59 crc kubenswrapper[4793]: I0130 13:58:59.305743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ssnl" event={"ID":"3e8d16db-eb58-4895-8c24-47d6f12b1ea4","Type":"ContainerStarted","Data":"27bd2894001dfffb134c2b97e60040970b8d244763407764387fc4dc4ce9b94e"} Jan 30 13:59:01 crc kubenswrapper[4793]: I0130 13:59:01.320120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"494ffb56d5753c45465eff5c0a4d4afad318fe1bd2db9b535c17b111d4564272"} Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.340688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" event={"ID":"342be2df-69a2-48ac-bad1-4445129ba471","Type":"ContainerStarted","Data":"34e99456281896b91f5998355a384b746fd5232666549694dca3c3e1848c2b28"} Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.341267 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.341365 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.341440 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.377852 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.406108 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:04 crc kubenswrapper[4793]: I0130 13:59:04.420444 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" podStartSLOduration=7.420427886 podStartE2EDuration="7.420427886s" podCreationTimestamp="2026-01-30 13:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:59:04.386115854 +0000 UTC m=+955.087464365" watchObservedRunningTime="2026-01-30 13:59:04.420427886 +0000 UTC m=+955.121776367" Jan 30 13:59:12 crc kubenswrapper[4793]: I0130 13:59:12.413883 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:59:12 crc kubenswrapper[4793]: I0130 13:59:12.415503 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:59:13 crc kubenswrapper[4793]: I0130 13:59:13.807872 4793 scope.go:117] "RemoveContainer" containerID="95f18526ac79b35583c3b436bf789d34e4d2907913a6856288c193760420cd7d" Jan 30 13:59:14 crc kubenswrapper[4793]: I0130 13:59:14.414603 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ssnl_3e8d16db-eb58-4895-8c24-47d6f12b1ea4/kube-multus/2.log" Jan 30 13:59:28 crc kubenswrapper[4793]: I0130 13:59:28.011943 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfl2" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.136674 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4"] Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.138140 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.139692 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.147626 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4"] Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.186001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.186205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.186250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.287846 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.287914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.287963 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.288574 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.288764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.307126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.459416 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.676581 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4"]
Jan 30 13:59:40 crc kubenswrapper[4793]: I0130 13:59:40.971498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerStarted","Data":"8e24f9b9bcb471ebd0938aeaf2a15d649b9ee08f57f2f1fa3db1889d608b6208"}
Jan 30 13:59:41 crc kubenswrapper[4793]: I0130 13:59:41.981371 4793 generic.go:334] "Generic (PLEG): container finished" podID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerID="1e233a4b22b25c43b3c6a8e65ce89f7e9846533b834f32e059dfe4cdb44551b5" exitCode=0
Jan 30 13:59:41 crc kubenswrapper[4793]: I0130 13:59:41.981463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"1e233a4b22b25c43b3c6a8e65ce89f7e9846533b834f32e059dfe4cdb44551b5"}
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.369861 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"]
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.371075 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.387497 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"]
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.419221 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.419281 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.429260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.429566 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.429804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531642 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.531876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.550376 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"redhat-operators-r9xlp\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:42 crc kubenswrapper[4793]: I0130 13:59:42.690269 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp"
Jan 30 13:59:43 crc kubenswrapper[4793]: I0130 13:59:43.096908 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"]
Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.007651 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerStarted","Data":"1e7de3555e80880d54038395ae121bedcc6c5978b8ce7b6a1757b99f65006ac4"}
Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.010485 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" exitCode=0
Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.010566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30"}
Jan 30 13:59:44 crc kubenswrapper[4793]: I0130 13:59:44.010616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerStarted","Data":"a7f6cd11bf61597471d4b3cc7d761e75ee9fbc7009499720876fb6770586f0a7"}
Jan 30 13:59:45 crc kubenswrapper[4793]: I0130 13:59:45.020432 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerStarted","Data":"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514"}
Jan 30 13:59:45 crc kubenswrapper[4793]: I0130 13:59:45.023756 4793 generic.go:334] "Generic (PLEG): container finished" podID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerID="1e7de3555e80880d54038395ae121bedcc6c5978b8ce7b6a1757b99f65006ac4" exitCode=0
Jan 30 13:59:45 crc kubenswrapper[4793]: I0130 13:59:45.023824 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"1e7de3555e80880d54038395ae121bedcc6c5978b8ce7b6a1757b99f65006ac4"}
Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.030826 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" exitCode=0
Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.031519 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514"}
Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.036306 4793 generic.go:334] "Generic (PLEG): container finished" podID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerID="bbb74e9c49a1cef752d2f80736e9c9e81375ecf59d8924bcc95c24115e7559d7" exitCode=0
Jan 30 13:59:46 crc kubenswrapper[4793]: I0130 13:59:46.036333 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"bbb74e9c49a1cef752d2f80736e9c9e81375ecf59d8924bcc95c24115e7559d7"}
event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"bbb74e9c49a1cef752d2f80736e9c9e81375ecf59d8924bcc95c24115e7559d7"} Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.308517 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.398031 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") pod \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.398095 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") pod \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.398785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle" (OuterVolumeSpecName: "bundle") pod "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" (UID: "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.403111 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk" (OuterVolumeSpecName: "kube-api-access-whnkk") pod "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" (UID: "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120"). InnerVolumeSpecName "kube-api-access-whnkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.498914 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") pod \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\" (UID: \"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120\") " Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.499654 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whnkk\" (UniqueName: \"kubernetes.io/projected/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-kube-api-access-whnkk\") on node \"crc\" DevicePath \"\"" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.499687 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.509664 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util" (OuterVolumeSpecName: "util") pod "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" (UID: "cd0e9042-d9db-4b5e-98b9-31ab2b3c4120"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:59:47 crc kubenswrapper[4793]: I0130 13:59:47.601688 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0e9042-d9db-4b5e-98b9-31ab2b3c4120-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.060631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerStarted","Data":"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31"} Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.063230 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" event={"ID":"cd0e9042-d9db-4b5e-98b9-31ab2b3c4120","Type":"ContainerDied","Data":"8e24f9b9bcb471ebd0938aeaf2a15d649b9ee08f57f2f1fa3db1889d608b6208"} Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.063293 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e24f9b9bcb471ebd0938aeaf2a15d649b9ee08f57f2f1fa3db1889d608b6208" Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.063301 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4" Jan 30 13:59:48 crc kubenswrapper[4793]: I0130 13:59:48.079692 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9xlp" podStartSLOduration=2.566484571 podStartE2EDuration="6.079676199s" podCreationTimestamp="2026-01-30 13:59:42 +0000 UTC" firstStartedPulling="2026-01-30 13:59:44.012817947 +0000 UTC m=+994.714166468" lastFinishedPulling="2026-01-30 13:59:47.526009595 +0000 UTC m=+998.227358096" observedRunningTime="2026-01-30 13:59:48.076953442 +0000 UTC m=+998.778301953" watchObservedRunningTime="2026-01-30 13:59:48.079676199 +0000 UTC m=+998.781024700" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601234 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9bsps"] Jan 30 13:59:51 crc kubenswrapper[4793]: E0130 13:59:51.601671 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="extract" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601683 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="extract" Jan 30 13:59:51 crc kubenswrapper[4793]: E0130 13:59:51.601701 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="util" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601707 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="util" Jan 30 13:59:51 crc kubenswrapper[4793]: E0130 13:59:51.601719 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="pull" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601726 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" containerName="pull" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.601859 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0e9042-d9db-4b5e-98b9-31ab2b3c4120" 
containerName="extract" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.602279 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.605784 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.607693 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.608541 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-96p7k" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.622229 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9bsps"] Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.662325 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5jw\" (UniqueName: \"kubernetes.io/projected/1f691ecb-c128-4332-a7ab-c4e173490f50-kube-api-access-fz5jw\") pod \"nmstate-operator-646758c888-9bsps\" (UID: \"1f691ecb-c128-4332-a7ab-c4e173490f50\") " pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.763330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5jw\" (UniqueName: \"kubernetes.io/projected/1f691ecb-c128-4332-a7ab-c4e173490f50-kube-api-access-fz5jw\") pod \"nmstate-operator-646758c888-9bsps\" (UID: \"1f691ecb-c128-4332-a7ab-c4e173490f50\") " pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.782025 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5jw\" (UniqueName: \"kubernetes.io/projected/1f691ecb-c128-4332-a7ab-c4e173490f50-kube-api-access-fz5jw\") pod \"nmstate-operator-646758c888-9bsps\" (UID: \"1f691ecb-c128-4332-a7ab-c4e173490f50\") " pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:51 crc kubenswrapper[4793]: I0130 13:59:51.916445 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" Jan 30 13:59:52 crc kubenswrapper[4793]: I0130 13:59:52.361940 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9bsps"] Jan 30 13:59:52 crc kubenswrapper[4793]: W0130 13:59:52.365236 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f691ecb_c128_4332_a7ab_c4e173490f50.slice/crio-e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42 WatchSource:0}: Error finding container e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42: Status 404 returned error can't find the container with id e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42 Jan 30 13:59:52 crc kubenswrapper[4793]: I0130 13:59:52.690797 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:52 crc kubenswrapper[4793]: I0130 13:59:52.691139 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 13:59:53 crc kubenswrapper[4793]: I0130 13:59:53.089035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" event={"ID":"1f691ecb-c128-4332-a7ab-c4e173490f50","Type":"ContainerStarted","Data":"e1bd2114db54a2f53196fb4a4b9be3df523085b8f476ac97db9de3580c6d3a42"} Jan 30 13:59:53 crc kubenswrapper[4793]: I0130 13:59:53.728466 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9xlp" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" probeResult="failure" output=< Jan 30 13:59:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 13:59:53 crc kubenswrapper[4793]: > Jan 30 13:59:55 crc kubenswrapper[4793]: I0130 13:59:55.100673 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" event={"ID":"1f691ecb-c128-4332-a7ab-c4e173490f50","Type":"ContainerStarted","Data":"51b24ad2dfba71f19e3fb756dfd4769fa3df27dbc9f3d17aa8e7d977a5cd78c0"} Jan 30 13:59:55 crc kubenswrapper[4793]: I0130 13:59:55.127120 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-9bsps" podStartSLOduration=2.000234449 podStartE2EDuration="4.12708629s" podCreationTimestamp="2026-01-30 13:59:51 +0000 UTC" firstStartedPulling="2026-01-30 13:59:52.366444831 +0000 UTC m=+1003.067793322" lastFinishedPulling="2026-01-30 13:59:54.493296672 +0000 UTC m=+1005.194645163" observedRunningTime="2026-01-30 13:59:55.120465548 +0000 UTC m=+1005.821814079" watchObservedRunningTime="2026-01-30 13:59:55.12708629 +0000 UTC m=+1005.828434821" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.166750 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.168494 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.170980 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.171131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.185798 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.370077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.370362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.370527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.471234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.471579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.471753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.472519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod 
\"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.480825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.489585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"collect-profiles-29496360-gwpwk\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.541084 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2gwr6"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.542105 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.545956 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gdrsf" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.551252 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.551837 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.555326 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2gwr6"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.561654 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572444 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgstl\" (UniqueName: \"kubernetes.io/projected/1a7bdce5-b625-40ce-b674-a834fcd178a8-kube-api-access-sgstl\") pod \"nmstate-metrics-54757c584b-2gwr6\" (UID: \"1a7bdce5-b625-40ce-b674-a834fcd178a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgff\" (UniqueName: \"kubernetes.io/projected/68bcadc4-02c3-44c0-a252-0606ff1f0a09-kube-api-access-vpgff\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.572597 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dh9db"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.573321 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.595989 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.673640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgstl\" (UniqueName: \"kubernetes.io/projected/1a7bdce5-b625-40ce-b674-a834fcd178a8-kube-api-access-sgstl\") pod \"nmstate-metrics-54757c584b-2gwr6\" (UID: \"1a7bdce5-b625-40ce-b674-a834fcd178a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.673889 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgff\" (UniqueName: \"kubernetes.io/projected/68bcadc4-02c3-44c0-a252-0606ff1f0a09-kube-api-access-vpgff\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.674023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: E0130 14:00:00.674212 4793 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 14:00:00 crc kubenswrapper[4793]: E0130 14:00:00.674329 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair podName:68bcadc4-02c3-44c0-a252-0606ff1f0a09 nodeName:}" failed. No retries permitted until 2026-01-30 14:00:01.174312836 +0000 UTC m=+1011.875661327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-hw489" (UID: "68bcadc4-02c3-44c0-a252-0606ff1f0a09") : secret "openshift-nmstate-webhook" not found Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.695931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgff\" (UniqueName: \"kubernetes.io/projected/68bcadc4-02c3-44c0-a252-0606ff1f0a09-kube-api-access-vpgff\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.706821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgstl\" (UniqueName: \"kubernetes.io/projected/1a7bdce5-b625-40ce-b674-a834fcd178a8-kube-api-access-sgstl\") pod \"nmstate-metrics-54757c584b-2gwr6\" (UID: \"1a7bdce5-b625-40ce-b674-a834fcd178a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.774796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-ovs-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.775214 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-dbus-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.775329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-nmstate-lock\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.775440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj2m\" (UniqueName: \"kubernetes.io/projected/e635e428-77d8-44fb-baa4-1af4bd603c10-kube-api-access-dsj2m\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.785437 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.829397 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.830426 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.837708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wh5fk" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.837785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.837973 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.850959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft"] Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.865250 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.877915 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-ovs-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-dbus-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878102 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-nmstate-lock\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878147 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsj2m\" (UniqueName: \"kubernetes.io/projected/e635e428-77d8-44fb-baa4-1af4bd603c10-kube-api-access-dsj2m\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878544 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-dbus-socket\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-nmstate-lock\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.878636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e635e428-77d8-44fb-baa4-1af4bd603c10-ovs-socket\") pod \"nmstate-handler-dh9db\" (UID: 
\"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.913433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsj2m\" (UniqueName: \"kubernetes.io/projected/e635e428-77d8-44fb-baa4-1af4bd603c10-kube-api-access-dsj2m\") pod \"nmstate-handler-dh9db\" (UID: \"e635e428-77d8-44fb-baa4-1af4bd603c10\") " pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.981090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85w8\" (UniqueName: \"kubernetes.io/projected/5df01042-63fe-458a-b71d-d1f9bdf9ea66-kube-api-access-d85w8\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.981175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df01042-63fe-458a-b71d-d1f9bdf9ea66-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:00 crc kubenswrapper[4793]: I0130 14:00:00.981213 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df01042-63fe-458a-b71d-d1f9bdf9ea66-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.086207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d85w8\" (UniqueName: \"kubernetes.io/projected/5df01042-63fe-458a-b71d-d1f9bdf9ea66-kube-api-access-d85w8\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.086738 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df01042-63fe-458a-b71d-d1f9bdf9ea66-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.086776 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df01042-63fe-458a-b71d-d1f9bdf9ea66-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.089658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df01042-63fe-458a-b71d-d1f9bdf9ea66-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.111350 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df01042-63fe-458a-b71d-d1f9bdf9ea66-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.132252 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5767d7b4df-v5z9l"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.134197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d85w8\" (UniqueName: \"kubernetes.io/projected/5df01042-63fe-458a-b71d-d1f9bdf9ea66-kube-api-access-d85w8\") pod \"nmstate-console-plugin-7754f76f8b-kc5ft\" (UID: \"5df01042-63fe-458a-b71d-d1f9bdf9ea66\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.138241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.152953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5767d7b4df-v5z9l"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.158635 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.188460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.192791 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/68bcadc4-02c3-44c0-a252-0606ff1f0a09-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hw489\" (UID: \"68bcadc4-02c3-44c0-a252-0606ff1f0a09\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.193338 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.220584 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode635e428_77d8_44fb_baa4_1af4bd603c10.slice/crio-c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708 WatchSource:0}: Error finding container c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708: Status 404 returned error can't find the container with id c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708 Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.246359 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.290815 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-trusted-ca-bundle\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.290909 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.290977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lz5g\" (UniqueName: \"kubernetes.io/projected/369f339c-5894-4bda-8e5a-aa9ef1a8456c-kube-api-access-8lz5g\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291018 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-service-ca\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291088 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291117 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-oauth-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.291185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-oauth-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lz5g\" (UniqueName: \"kubernetes.io/projected/369f339c-5894-4bda-8e5a-aa9ef1a8456c-kube-api-access-8lz5g\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-service-ca\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-oauth-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-oauth-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-trusted-ca-bundle\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.394682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.396367 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-trusted-ca-bundle\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.396558 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.396764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-service-ca\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.397279 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/369f339c-5894-4bda-8e5a-aa9ef1a8456c-oauth-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.398955 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-oauth-config\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.401235 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/369f339c-5894-4bda-8e5a-aa9ef1a8456c-console-serving-cert\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.418232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lz5g\" (UniqueName: \"kubernetes.io/projected/369f339c-5894-4bda-8e5a-aa9ef1a8456c-kube-api-access-8lz5g\") pod \"console-5767d7b4df-v5z9l\" (UID: \"369f339c-5894-4bda-8e5a-aa9ef1a8456c\") " pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.464786 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.471649 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.474589 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.479360 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5df01042_63fe_458a_b71d_d1f9bdf9ea66.slice/crio-4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3 WatchSource:0}: Error finding container 4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3: Status 404 returned error can't find the container with id 4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3 Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.520724 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2gwr6"] Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.795717 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5767d7b4df-v5z9l"] Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.801848 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod369f339c_5894_4bda_8e5a_aa9ef1a8456c.slice/crio-5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f WatchSource:0}: Error finding container 5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f: Status 404 returned error can't find the container with id 5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f Jan 30 14:00:01 crc kubenswrapper[4793]: I0130 14:00:01.850935 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489"] Jan 30 14:00:01 crc kubenswrapper[4793]: W0130 14:00:01.873461 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68bcadc4_02c3_44c0_a252_0606ff1f0a09.slice/crio-5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d WatchSource:0}: Error finding container 5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d: Status 404 returned error can't find the container with id 5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.148262 4793 generic.go:334] "Generic (PLEG): container finished" podID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerID="21efee8d4521693281692f27a68228834ba45b6ab82173ff835a52b2e30855b1" exitCode=0 Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.148344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" event={"ID":"0262a970-62b2-47c1-93bf-1e4455a999bf","Type":"ContainerDied","Data":"21efee8d4521693281692f27a68228834ba45b6ab82173ff835a52b2e30855b1"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.148409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" event={"ID":"0262a970-62b2-47c1-93bf-1e4455a999bf","Type":"ContainerStarted","Data":"64c0c3a6986cd308648b3ad53f5fdb56a5e0c9ad5021668cc815471ffff6de56"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.149513 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dh9db" event={"ID":"e635e428-77d8-44fb-baa4-1af4bd603c10","Type":"ContainerStarted","Data":"c13b50673f050bf855ff9570919519a96213c7580babfc5bf70bdfb54cb3f708"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.152952 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" event={"ID":"68bcadc4-02c3-44c0-a252-0606ff1f0a09","Type":"ContainerStarted","Data":"5b961beb4e1c8a306793310558f5be310911f219aaf1e8624108ad9e62a3b66d"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.155834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5767d7b4df-v5z9l" event={"ID":"369f339c-5894-4bda-8e5a-aa9ef1a8456c","Type":"ContainerStarted","Data":"c1dd9263c27873f41299a7f96df549b99d19f3103391f2126d720071631ba670"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.155988 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5767d7b4df-v5z9l" event={"ID":"369f339c-5894-4bda-8e5a-aa9ef1a8456c","Type":"ContainerStarted","Data":"5a315ccacc881b6f4694bec75498b54a0a2709ea3226dcfce64d9c8b3375227f"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.158088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" event={"ID":"1a7bdce5-b625-40ce-b674-a834fcd178a8","Type":"ContainerStarted","Data":"da81f03cdba551cc826e13e5619ff1eaca5dc68a3ce7c54b64edcb6017ada240"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.159575 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" event={"ID":"5df01042-63fe-458a-b71d-d1f9bdf9ea66","Type":"ContainerStarted","Data":"4cbc0c355d70f7809c85b48a2660dfc14be8a9a4ed00e20ae46be4e03fe915d3"} Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.194944 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5767d7b4df-v5z9l" podStartSLOduration=1.194927723 podStartE2EDuration="1.194927723s" podCreationTimestamp="2026-01-30 14:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:00:02.194259667 +0000 UTC m=+1012.895608158" watchObservedRunningTime="2026-01-30 14:00:02.194927723 +0000 UTC m=+1012.896276214" Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.742894 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.793747 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:02 crc kubenswrapper[4793]: I0130 14:00:02.983200 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.400598 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.527666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") pod \"0262a970-62b2-47c1-93bf-1e4455a999bf\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.527747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") pod \"0262a970-62b2-47c1-93bf-1e4455a999bf\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.527858 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") pod \"0262a970-62b2-47c1-93bf-1e4455a999bf\" (UID: \"0262a970-62b2-47c1-93bf-1e4455a999bf\") " Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.529835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "0262a970-62b2-47c1-93bf-1e4455a999bf" (UID: "0262a970-62b2-47c1-93bf-1e4455a999bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.534496 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55" (OuterVolumeSpecName: "kube-api-access-t8s55") pod "0262a970-62b2-47c1-93bf-1e4455a999bf" (UID: "0262a970-62b2-47c1-93bf-1e4455a999bf"). InnerVolumeSpecName "kube-api-access-t8s55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.535207 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0262a970-62b2-47c1-93bf-1e4455a999bf" (UID: "0262a970-62b2-47c1-93bf-1e4455a999bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.629611 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8s55\" (UniqueName: \"kubernetes.io/projected/0262a970-62b2-47c1-93bf-1e4455a999bf-kube-api-access-t8s55\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.629672 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0262a970-62b2-47c1-93bf-1e4455a999bf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:03 crc kubenswrapper[4793]: I0130 14:00:03.630000 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0262a970-62b2-47c1-93bf-1e4455a999bf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186624 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk" event={"ID":"0262a970-62b2-47c1-93bf-1e4455a999bf","Type":"ContainerDied","Data":"64c0c3a6986cd308648b3ad53f5fdb56a5e0c9ad5021668cc815471ffff6de56"} Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186816 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c0c3a6986cd308648b3ad53f5fdb56a5e0c9ad5021668cc815471ffff6de56" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.186911 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9xlp" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" containerID="cri-o://344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" gracePeriod=2 Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.575668 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.647294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") pod \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.652343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") pod \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.652507 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") pod \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\" (UID: \"8c59ec83-7715-4a59-a31b-b433cc9d77a7\") " Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.653340 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities" (OuterVolumeSpecName: "utilities") pod "8c59ec83-7715-4a59-a31b-b433cc9d77a7" (UID: "8c59ec83-7715-4a59-a31b-b433cc9d77a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.674239 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh" (OuterVolumeSpecName: "kube-api-access-knlxh") pod "8c59ec83-7715-4a59-a31b-b433cc9d77a7" (UID: "8c59ec83-7715-4a59-a31b-b433cc9d77a7"). InnerVolumeSpecName "kube-api-access-knlxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.754433 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.754473 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knlxh\" (UniqueName: \"kubernetes.io/projected/8c59ec83-7715-4a59-a31b-b433cc9d77a7-kube-api-access-knlxh\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.783596 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c59ec83-7715-4a59-a31b-b433cc9d77a7" (UID: "8c59ec83-7715-4a59-a31b-b433cc9d77a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:04 crc kubenswrapper[4793]: I0130 14:00:04.855949 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c59ec83-7715-4a59-a31b-b433cc9d77a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192833 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" exitCode=0 Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31"} Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192897 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9xlp" event={"ID":"8c59ec83-7715-4a59-a31b-b433cc9d77a7","Type":"ContainerDied","Data":"a7f6cd11bf61597471d4b3cc7d761e75ee9fbc7009499720876fb6770586f0a7"} Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192914 4793 scope.go:117] "RemoveContainer" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.192912 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9xlp" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.226998 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.233931 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9xlp"] Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.377561 4793 scope.go:117] "RemoveContainer" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.403565 4793 scope.go:117] "RemoveContainer" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.456060 4793 scope.go:117] "RemoveContainer" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" Jan 30 14:00:05 crc kubenswrapper[4793]: E0130 14:00:05.456598 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31\": container with ID starting with 344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31 not found: ID does not exist" containerID="344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.456630 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31"} err="failed to get container status \"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31\": rpc error: code = NotFound desc = could not find container \"344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31\": container with ID starting with 344e6700f7ede0a1c1ec39590599a1cc7c4a7174267a6f5e4f9d50a2d3d96e31 not found: ID does not exist" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.456663 4793 scope.go:117] "RemoveContainer" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" Jan 30 14:00:05 crc kubenswrapper[4793]: E0130 14:00:05.457868 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514\": container with ID starting with aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514 not found: ID does not exist" containerID="aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.457889 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514"} err="failed to get container status \"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514\": rpc error: code = NotFound desc = could not find container \"aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514\": container with ID starting with aad6159ab7a0ca9ccd62c9a80635f909b5e1e66c05d0facb402055f26d757514 not found: ID does not exist" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.457903 4793 scope.go:117] "RemoveContainer" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" Jan 30 14:00:05 crc kubenswrapper[4793]: E0130 14:00:05.458265 4793 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30\": container with ID starting with dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30 not found: ID does not exist" containerID="dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30" Jan 30 14:00:05 crc kubenswrapper[4793]: I0130 14:00:05.458287 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30"} err="failed to get container status \"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30\": rpc error: code = NotFound desc = could not find container \"dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30\": container with ID starting with dc69cee53f5f5860cbfa5e6e9d137ee33e354cc3190fd1e4759f02fcc580de30 not found: ID does not exist" Jan 30 14:00:06 crc kubenswrapper[4793]: I0130 14:00:06.198630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" event={"ID":"68bcadc4-02c3-44c0-a252-0606ff1f0a09","Type":"ContainerStarted","Data":"4bf25963d2cd39801b243d4773e8508dcb28686cd0c45d63749828e61735a1c3"} Jan 30 14:00:06 crc kubenswrapper[4793]: I0130 14:00:06.198940 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:06 crc kubenswrapper[4793]: I0130 14:00:06.406339 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" path="/var/lib/kubelet/pods/8c59ec83-7715-4a59-a31b-b433cc9d77a7/volumes" Jan 30 14:00:09 crc kubenswrapper[4793]: I0130 14:00:09.218324 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dh9db" event={"ID":"e635e428-77d8-44fb-baa4-1af4bd603c10","Type":"ContainerStarted","Data":"1377f28a7f0b4a414b4b9738eef54a994c785687bcde1f5466f1e45c6e5cbb3f"} Jan 30 14:00:09 crc kubenswrapper[4793]: I0130 14:00:09.218856 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:09 crc kubenswrapper[4793]: I0130 14:00:09.230800 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" podStartSLOduration=5.642250926 podStartE2EDuration="9.23078338s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.876024085 +0000 UTC m=+1012.577372576" lastFinishedPulling="2026-01-30 14:00:05.464556539 +0000 UTC m=+1016.165905030" observedRunningTime="2026-01-30 14:00:06.220002239 +0000 UTC m=+1016.921350750" watchObservedRunningTime="2026-01-30 14:00:09.23078338 +0000 UTC m=+1019.932131871" Jan 30 14:00:10 crc kubenswrapper[4793]: I0130 14:00:10.418300 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dh9db" podStartSLOduration=3.083622952 podStartE2EDuration="10.418279082s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.222658239 +0000 UTC m=+1011.924006730" lastFinishedPulling="2026-01-30 14:00:08.557314339 +0000 UTC m=+1019.258662860" observedRunningTime="2026-01-30 14:00:09.231664742 +0000 UTC m=+1019.933013253" watchObservedRunningTime="2026-01-30 14:00:10.418279082 +0000 UTC m=+1021.119627573" Jan 30 14:00:11 crc kubenswrapper[4793]: I0130 14:00:11.465425 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:11 crc kubenswrapper[4793]: I0130 14:00:11.465775 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:11 crc kubenswrapper[4793]: I0130 14:00:11.470156 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.241277 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5767d7b4df-v5z9l" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.307171 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.413501 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.413563 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.413608 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.414208 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:00:12 crc kubenswrapper[4793]: I0130 14:00:12.414274 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad" gracePeriod=600 Jan 30 14:00:13 crc kubenswrapper[4793]: I0130 14:00:13.245365 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad" exitCode=0 Jan 30 14:00:13 crc kubenswrapper[4793]: I0130 14:00:13.245405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad"} Jan 30 14:00:13 crc kubenswrapper[4793]: I0130 14:00:13.245863 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28"} Jan 30 14:00:13 crc 
kubenswrapper[4793]: I0130 14:00:13.245888 4793 scope.go:117] "RemoveContainer" containerID="b9cf45bf1a50275470b74653bea158e128b7fd786c16cf7d32b21f4133fd1baa" Jan 30 14:00:15 crc kubenswrapper[4793]: I0130 14:00:15.265321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" event={"ID":"1a7bdce5-b625-40ce-b674-a834fcd178a8","Type":"ContainerStarted","Data":"e30e718785f12382656876fa7585be638cfe0dd79889855f5a994ced8033d38d"} Jan 30 14:00:16 crc kubenswrapper[4793]: I0130 14:00:16.218186 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dh9db" Jan 30 14:00:19 crc kubenswrapper[4793]: I0130 14:00:19.452502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" event={"ID":"1a7bdce5-b625-40ce-b674-a834fcd178a8","Type":"ContainerStarted","Data":"058b6d62cbb40fce810098a2d0261de1aba5023da85e8fa2a79824ddb5096f7f"} Jan 30 14:00:19 crc kubenswrapper[4793]: I0130 14:00:19.454469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" event={"ID":"5df01042-63fe-458a-b71d-d1f9bdf9ea66","Type":"ContainerStarted","Data":"82d3f200b8bf09e3e0c6fa5be1702a767313348d3da5aac8f66bcd610f5a6bfa"} Jan 30 14:00:20 crc kubenswrapper[4793]: I0130 14:00:20.474614 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kc5ft" podStartSLOduration=2.854421801 podStartE2EDuration="20.474598447s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.491445537 +0000 UTC m=+1012.192794028" lastFinishedPulling="2026-01-30 14:00:19.111622173 +0000 UTC m=+1029.812970674" observedRunningTime="2026-01-30 14:00:20.472384223 +0000 UTC m=+1031.173732734" watchObservedRunningTime="2026-01-30 14:00:20.474598447 +0000 UTC m=+1031.175946938" Jan 30 14:00:20 crc kubenswrapper[4793]: I0130 14:00:20.504334 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-2gwr6" podStartSLOduration=2.910273061 podStartE2EDuration="20.504315696s" podCreationTimestamp="2026-01-30 14:00:00 +0000 UTC" firstStartedPulling="2026-01-30 14:00:01.534665217 +0000 UTC m=+1012.236013708" lastFinishedPulling="2026-01-30 14:00:19.128707842 +0000 UTC m=+1029.830056343" observedRunningTime="2026-01-30 14:00:20.503638739 +0000 UTC m=+1031.204987260" watchObservedRunningTime="2026-01-30 14:00:20.504315696 +0000 UTC m=+1031.205664187" Jan 30 14:00:21 crc kubenswrapper[4793]: I0130 14:00:21.480008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hw489" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.573372 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574306 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="extract-utilities" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574328 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="extract-utilities" Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574346 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" 
containerName="extract-content" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574358 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="extract-content" Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574378 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerName="collect-profiles" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574388 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerName="collect-profiles" Jan 30 14:00:27 crc kubenswrapper[4793]: E0130 14:00:27.574411 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574575 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" containerName="collect-profiles" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.574601 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c59ec83-7715-4a59-a31b-b433cc9d77a7" containerName="registry-server" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.575838 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.591704 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.617950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.618095 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.618126 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719486 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.719977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.720092 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.741759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"redhat-marketplace-j5rsz\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:27 crc kubenswrapper[4793]: I0130 14:00:27.905691 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.380278 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:28 crc kubenswrapper[4793]: W0130 14:00:28.396749 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f70350_2f2a_41aa_900d_d42d13231186.slice/crio-07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462 WatchSource:0}: Error finding container 07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462: Status 404 returned error can't find the container with id 07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462 Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.523155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerStarted","Data":"07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462"} Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.747231 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.748254 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.776365 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.863243 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.863308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.863376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967573 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.967743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:28 crc kubenswrapper[4793]: I0130 14:00:28.968173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:28.991612 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"certified-operators-jsbqs\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.105506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.541437 4793 generic.go:334] "Generic (PLEG): container finished" podID="94f70350-2f2a-41aa-900d-d42d13231186" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" exitCode=0 Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.542035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22"} Jan 30 14:00:29 crc kubenswrapper[4793]: I0130 14:00:29.691321 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.550240 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerStarted","Data":"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03"} Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.553888 4793 generic.go:334] "Generic (PLEG): container finished" podID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" exitCode=0 Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.553931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed"} Jan 30 14:00:30 crc kubenswrapper[4793]: I0130 14:00:30.553976 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerStarted","Data":"398390322b79ae3539c03801cd1c80713e78c256487b16c885394a72c17c0058"} Jan 30 14:00:31 crc kubenswrapper[4793]: I0130 14:00:31.563334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerStarted","Data":"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a"} Jan 30 14:00:31 crc kubenswrapper[4793]: I0130 14:00:31.566262 4793 generic.go:334] "Generic (PLEG): container finished" podID="94f70350-2f2a-41aa-900d-d42d13231186" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" exitCode=0 Jan 30 14:00:31 crc kubenswrapper[4793]: I0130 14:00:31.566288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03"} Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.588618 4793 generic.go:334] "Generic (PLEG): container finished" podID="31ef0a7f-aa60-4b86-b113-da5bc0614016" 
containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" exitCode=0 Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.588716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a"} Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.599713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerStarted","Data":"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f"} Jan 30 14:00:32 crc kubenswrapper[4793]: I0130 14:00:32.638969 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5rsz" podStartSLOduration=3.108877059 podStartE2EDuration="5.638950609s" podCreationTimestamp="2026-01-30 14:00:27 +0000 UTC" firstStartedPulling="2026-01-30 14:00:29.545490167 +0000 UTC m=+1040.246838658" lastFinishedPulling="2026-01-30 14:00:32.075563717 +0000 UTC m=+1042.776912208" observedRunningTime="2026-01-30 14:00:32.637904523 +0000 UTC m=+1043.339253044" watchObservedRunningTime="2026-01-30 14:00:32.638950609 +0000 UTC m=+1043.340299100" Jan 30 14:00:33 crc kubenswrapper[4793]: I0130 14:00:33.607183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerStarted","Data":"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092"} Jan 30 14:00:33 crc kubenswrapper[4793]: I0130 14:00:33.630277 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jsbqs" podStartSLOduration=3.011142405 podStartE2EDuration="5.630262682s" podCreationTimestamp="2026-01-30 14:00:28 +0000 UTC" firstStartedPulling="2026-01-30 14:00:30.562537688 +0000 UTC m=+1041.263886179" lastFinishedPulling="2026-01-30 14:00:33.181657965 +0000 UTC m=+1043.883006456" observedRunningTime="2026-01-30 14:00:33.628190091 +0000 UTC m=+1044.329538592" watchObservedRunningTime="2026-01-30 14:00:33.630262682 +0000 UTC m=+1044.331611173" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.787833 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29"] Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.789918 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.792207 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.801258 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29"] Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.868486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.868554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.868636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969443 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969500 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.969988 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:35 crc kubenswrapper[4793]: I0130 14:00:35.995337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.121272 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.332645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29"] Jan 30 14:00:36 crc kubenswrapper[4793]: W0130 14:00:36.337293 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd35260_c3c5_4f56_b2ba_d47ca60144d8.slice/crio-eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2 WatchSource:0}: Error finding container eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2: Status 404 returned error can't find the container with id eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2 Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.627809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerStarted","Data":"878d99d7959e602dd8cc87e89ddc1c7c2bb3b8f3a1159a3fc592f63dc34a5c3a"} Jan 30 14:00:36 crc kubenswrapper[4793]: I0130 14:00:36.627862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerStarted","Data":"eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2"} Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.352892 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kknzc" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" containerID="cri-o://b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d" gracePeriod=15 Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.634456 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kknzc_69c74b2a-9812-42cf-90b7-b431e2b5f5cf/console/0.log" Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 
14:00:37.634654 4793 generic.go:334] "Generic (PLEG): container finished" podID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerID="b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d" exitCode=2 Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.634749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerDied","Data":"b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d"} Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.635972 4793 generic.go:334] "Generic (PLEG): container finished" podID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerID="878d99d7959e602dd8cc87e89ddc1c7c2bb3b8f3a1159a3fc592f63dc34a5c3a" exitCode=0 Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.636077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"878d99d7959e602dd8cc87e89ddc1c7c2bb3b8f3a1159a3fc592f63dc34a5c3a"} Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.905977 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.906069 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:37 crc kubenswrapper[4793]: I0130 14:00:37.943898 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.560676 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kknzc_69c74b2a-9812-42cf-90b7-b431e2b5f5cf/console/0.log" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.561060 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.608397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.609473 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.609550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.610086 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.610873 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611021 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611678 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611736 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") pod \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\" (UID: \"69c74b2a-9812-42cf-90b7-b431e2b5f5cf\") " Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.611610 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config" (OuterVolumeSpecName: "console-config") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612414 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612432 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612443 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.612857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca" (OuterVolumeSpecName: "service-ca") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.616028 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.616912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.623628 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd" (OuterVolumeSpecName: "kube-api-access-4w2cd") pod "69c74b2a-9812-42cf-90b7-b431e2b5f5cf" (UID: "69c74b2a-9812-42cf-90b7-b431e2b5f5cf"). InnerVolumeSpecName "kube-api-access-4w2cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652079 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kknzc_69c74b2a-9812-42cf-90b7-b431e2b5f5cf/console/0.log" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652386 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kknzc" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kknzc" event={"ID":"69c74b2a-9812-42cf-90b7-b431e2b5f5cf","Type":"ContainerDied","Data":"333d1fe50b85de201d8359b376659ea922dde6cd7dc921f7d1df2397e061732e"} Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.652655 4793 scope.go:117] "RemoveContainer" containerID="b72e6d29d1b411597eb5d49883f3b670ed4875b2819be1937cc8b9bc5e0bb53d" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.695118 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.697592 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.702876 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kknzc"] Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713204 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713239 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713251 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w2cd\" (UniqueName: \"kubernetes.io/projected/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-kube-api-access-4w2cd\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:38 crc kubenswrapper[4793]: I0130 14:00:38.713262 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69c74b2a-9812-42cf-90b7-b431e2b5f5cf-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.105719 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.106080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.163741 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:39 crc kubenswrapper[4793]: I0130 14:00:39.705416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:40 crc kubenswrapper[4793]: I0130 14:00:40.406930 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" path="/var/lib/kubelet/pods/69c74b2a-9812-42cf-90b7-b431e2b5f5cf/volumes" Jan 30 14:00:40 crc kubenswrapper[4793]: I0130 14:00:40.668630 4793 generic.go:334] "Generic (PLEG): container finished" podID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerID="4f58c8c0c09b669a69ae5be230231a2d273759024ad947b4a71132c84b7c0ae0" exitCode=0 Jan 30 14:00:40 crc kubenswrapper[4793]: I0130 14:00:40.668720 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"4f58c8c0c09b669a69ae5be230231a2d273759024ad947b4a71132c84b7c0ae0"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.129861 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.130363 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5rsz" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server" containerID="cri-o://6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" gracePeriod=2 Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.511078 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.556113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") pod \"94f70350-2f2a-41aa-900d-d42d13231186\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.556155 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") pod \"94f70350-2f2a-41aa-900d-d42d13231186\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.556214 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") pod \"94f70350-2f2a-41aa-900d-d42d13231186\" (UID: \"94f70350-2f2a-41aa-900d-d42d13231186\") " Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.557384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities" (OuterVolumeSpecName: "utilities") pod "94f70350-2f2a-41aa-900d-d42d13231186" (UID: "94f70350-2f2a-41aa-900d-d42d13231186"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.561541 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn" (OuterVolumeSpecName: "kube-api-access-9mfbn") pod "94f70350-2f2a-41aa-900d-d42d13231186" (UID: "94f70350-2f2a-41aa-900d-d42d13231186"). InnerVolumeSpecName "kube-api-access-9mfbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.657506 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.657537 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfbn\" (UniqueName: \"kubernetes.io/projected/94f70350-2f2a-41aa-900d-d42d13231186-kube-api-access-9mfbn\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.677710 4793 generic.go:334] "Generic (PLEG): container finished" podID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerID="a19feb6d08a072aa80c9c8b9c5323dbdc049c25d5690e9bb77d8a86b28541886" exitCode=0 Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.677795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"a19feb6d08a072aa80c9c8b9c5323dbdc049c25d5690e9bb77d8a86b28541886"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682256 4793 generic.go:334] "Generic (PLEG): container finished" podID="94f70350-2f2a-41aa-900d-d42d13231186" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" exitCode=0 Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682303 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5rsz" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682302 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5rsz" event={"ID":"94f70350-2f2a-41aa-900d-d42d13231186","Type":"ContainerDied","Data":"07c6594f1106c2b711671cdfc1e7a231287d4f651dfde3fcb5e7d7f515ba7462"} Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.682383 4793 scope.go:117] "RemoveContainer" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.701683 4793 scope.go:117] "RemoveContainer" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.716860 4793 scope.go:117] "RemoveContainer" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.740311 4793 scope.go:117] "RemoveContainer" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" Jan 30 14:00:41 crc kubenswrapper[4793]: E0130 14:00:41.740671 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f\": container with ID starting with 6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f not found: ID does not exist" containerID="6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.740707 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f"} err="failed to get container status \"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f\": rpc error: code = NotFound desc = could not find container \"6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f\": container with ID starting with 6e23cc0d1022008b9c22764e9d56e8207d7c7fa321dc44eba8bf76fcbfd8d00f not found: ID does not exist" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.740727 4793 scope.go:117] "RemoveContainer" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" Jan 30 14:00:41 crc kubenswrapper[4793]: E0130 14:00:41.741479 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03\": container with ID starting with dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03 not found: ID does not exist" containerID="dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.741502 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03"} err="failed to get container status \"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03\": rpc error: code = NotFound desc = could not find container \"dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03\": container with ID starting with dfb1e2085461eb0f95f4290294c29a367c89630759d7348b8aeaf999f6ddee03 not found: ID does not exist" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.741516 4793 scope.go:117] "RemoveContainer" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" Jan 30 14:00:41 crc kubenswrapper[4793]: E0130 14:00:41.742091 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22\": container with ID starting with 6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22 not found: ID does not exist" containerID="6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22" Jan 30 14:00:41 crc kubenswrapper[4793]: I0130 14:00:41.742119 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22"} err="failed to get container status \"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22\": rpc error: code = NotFound desc = could not find container \"6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22\": container with ID starting with 6a29d94b5ced2df212fa9d7d6151bf60aeab20f74491c678c3f7f6b3d56cbe22 not found: ID does not exist" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.348557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94f70350-2f2a-41aa-900d-d42d13231186" (UID: "94f70350-2f2a-41aa-900d-d42d13231186"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.385534 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f70350-2f2a-41aa-900d-d42d13231186-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.599494 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.603585 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5rsz"] Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.730990 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.731298 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsbqs" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server" containerID="cri-o://1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" gracePeriod=2 Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.932326 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.993562 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") pod \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.993610 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") pod \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.993654 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") pod \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\" (UID: \"7bd35260-c3c5-4f56-b2ba-d47ca60144d8\") " Jan 30 14:00:42 crc kubenswrapper[4793]: I0130 14:00:42.995154 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle" (OuterVolumeSpecName: "bundle") pod "7bd35260-c3c5-4f56-b2ba-d47ca60144d8" (UID: "7bd35260-c3c5-4f56-b2ba-d47ca60144d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.002245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m" (OuterVolumeSpecName: "kube-api-access-snb7m") pod "7bd35260-c3c5-4f56-b2ba-d47ca60144d8" (UID: "7bd35260-c3c5-4f56-b2ba-d47ca60144d8"). InnerVolumeSpecName "kube-api-access-snb7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.008396 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util" (OuterVolumeSpecName: "util") pod "7bd35260-c3c5-4f56-b2ba-d47ca60144d8" (UID: "7bd35260-c3c5-4f56-b2ba-d47ca60144d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.095310 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.095337 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snb7m\" (UniqueName: \"kubernetes.io/projected/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-kube-api-access-snb7m\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.095348 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd35260-c3c5-4f56-b2ba-d47ca60144d8-util\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.708620 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" event={"ID":"7bd35260-c3c5-4f56-b2ba-d47ca60144d8","Type":"ContainerDied","Data":"eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2"} Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.708963 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb50d2cd1d053f969b6a001bb5877c8b3fca79207a0ecc325147f2f1e2e298a2" Jan 30 14:00:43 crc kubenswrapper[4793]: I0130 14:00:43.708747 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.406709 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f70350-2f2a-41aa-900d-d42d13231186" path="/var/lib/kubelet/pods/94f70350-2f2a-41aa-900d-d42d13231186/volumes" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.532408 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.615314 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") pod \"31ef0a7f-aa60-4b86-b113-da5bc0614016\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.615382 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") pod \"31ef0a7f-aa60-4b86-b113-da5bc0614016\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.615434 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") pod \"31ef0a7f-aa60-4b86-b113-da5bc0614016\" (UID: \"31ef0a7f-aa60-4b86-b113-da5bc0614016\") " Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.617103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities" (OuterVolumeSpecName: "utilities") pod "31ef0a7f-aa60-4b86-b113-da5bc0614016" (UID: "31ef0a7f-aa60-4b86-b113-da5bc0614016"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.621389 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7" (OuterVolumeSpecName: "kube-api-access-k9rc7") pod "31ef0a7f-aa60-4b86-b113-da5bc0614016" (UID: "31ef0a7f-aa60-4b86-b113-da5bc0614016"). InnerVolumeSpecName "kube-api-access-k9rc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.675983 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31ef0a7f-aa60-4b86-b113-da5bc0614016" (UID: "31ef0a7f-aa60-4b86-b113-da5bc0614016"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.716379 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.716406 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef0a7f-aa60-4b86-b113-da5bc0614016-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.716422 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9rc7\" (UniqueName: \"kubernetes.io/projected/31ef0a7f-aa60-4b86-b113-da5bc0614016-kube-api-access-k9rc7\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719304 4793 generic.go:334] "Generic (PLEG): container finished" podID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" exitCode=0 Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719355 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092"} Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719392 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsbqs" event={"ID":"31ef0a7f-aa60-4b86-b113-da5bc0614016","Type":"ContainerDied","Data":"398390322b79ae3539c03801cd1c80713e78c256487b16c885394a72c17c0058"} Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719419 4793 scope.go:117] "RemoveContainer" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.719593 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsbqs" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.759957 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.761476 4793 scope.go:117] "RemoveContainer" containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.763862 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsbqs"] Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.779313 4793 scope.go:117] "RemoveContainer" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793169 4793 scope.go:117] "RemoveContainer" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" Jan 30 14:00:44 crc kubenswrapper[4793]: E0130 14:00:44.793523 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092\": container with ID starting with 1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092 not found: ID does not exist" containerID="1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793562 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092"} err="failed to get container status \"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092\": rpc error: code = NotFound desc = could not find container \"1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092\": container with ID starting with 1206a18e32b881e12926b4be6f0f543ebd99be1e9dd87b8b54fbd74a05a74092 not found: ID does not exist" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793590 4793 scope.go:117] "RemoveContainer" containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" Jan 30 14:00:44 crc kubenswrapper[4793]: E0130 14:00:44.793867 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a\": container with ID starting with 461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a not found: ID does not exist" containerID="461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793929 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a"} err="failed to get container status \"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a\": rpc error: code = NotFound desc = could not find container \"461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a\": container with ID starting with 461945159e8785aa74633018b990799ed5e7fbf0ef4079aaa58250d79c0caf4a not found: ID does not exist" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.793963 4793 scope.go:117] "RemoveContainer" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" Jan 30 14:00:44 crc kubenswrapper[4793]: E0130 14:00:44.794337 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed\": container with ID starting with e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed not found: ID does not exist" containerID="e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed" Jan 30 14:00:44 crc kubenswrapper[4793]: I0130 14:00:44.794368 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed"} err="failed to get container status \"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed\": rpc error: code = NotFound desc = could not find container \"e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed\": container with ID starting with e0a492b0c692162e81233f2e36ef822446c24d9a919960dc7ff9a0bc1b6e3eed not found: ID does not exist" Jan 30 14:00:46 crc kubenswrapper[4793]: I0130 14:00:46.405217 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" path="/var/lib/kubelet/pods/31ef0a7f-aa60-4b86-b113-da5bc0614016/volumes" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941311 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"] Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941762 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941774 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941791 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941798 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-content" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941804 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="extract" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941810 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="extract" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941820 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-utilities" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941826 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="extract-utilities" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941834 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="util" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941841 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="util" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941848 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-utilities" Jan 30 14:00:51 
crc kubenswrapper[4793]: I0130 14:00:51.941855 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="extract-utilities" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941864 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941870 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941877 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941883 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941891 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="pull" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941896 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="pull" Jan 30 14:00:51 crc kubenswrapper[4793]: E0130 14:00:51.941905 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.941910 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942025 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef0a7f-aa60-4b86-b113-da5bc0614016" containerName="registry-server" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942033 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd35260-c3c5-4f56-b2ba-d47ca60144d8" containerName="extract" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942064 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c74b2a-9812-42cf-90b7-b431e2b5f5cf" containerName="console" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942078 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f70350-2f2a-41aa-900d-d42d13231186" containerName="registry-server" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.942433 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.944607 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.944932 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.945170 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.945830 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.948294 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9xc56" Jan 30 14:00:51 crc kubenswrapper[4793]: I0130 14:00:51.963768 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"] Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.022978 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-apiservice-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.023077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2xm\" (UniqueName: \"kubernetes.io/projected/75266e51-59ee-432d-b56a-ba972e5ff25b-kube-api-access-mv2xm\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.023246 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-webhook-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.124169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-webhook-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.124260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-apiservice-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.124314 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2xm\" (UniqueName: \"kubernetes.io/projected/75266e51-59ee-432d-b56a-ba972e5ff25b-kube-api-access-mv2xm\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.133594 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-apiservice-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.144724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2xm\" (UniqueName: \"kubernetes.io/projected/75266e51-59ee-432d-b56a-ba972e5ff25b-kube-api-access-mv2xm\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.145658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75266e51-59ee-432d-b56a-ba972e5ff25b-webhook-cert\") pod \"metallb-operator-controller-manager-7fbd4d697c-ndglw\" (UID: \"75266e51-59ee-432d-b56a-ba972e5ff25b\") " pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.259896 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.377109 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm"] Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.377907 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.380197 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-s8xbv" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.380479 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.384371 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.464854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-webhook-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.465135 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5sqk\" (UniqueName: \"kubernetes.io/projected/45949f1b-1075-4d7f-9007-8525e0364a55-kube-api-access-n5sqk\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.465227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-apiservice-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.529137 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm"] Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.567121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-webhook-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.567322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5sqk\" (UniqueName: \"kubernetes.io/projected/45949f1b-1075-4d7f-9007-8525e0364a55-kube-api-access-n5sqk\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.568753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-apiservice-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 
14:00:52.581772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-apiservice-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.618244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45949f1b-1075-4d7f-9007-8525e0364a55-webhook-cert\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.623694 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5sqk\" (UniqueName: \"kubernetes.io/projected/45949f1b-1075-4d7f-9007-8525e0364a55-kube-api-access-n5sqk\") pod \"metallb-operator-webhook-server-6446fc49bd-rzbbm\" (UID: \"45949f1b-1075-4d7f-9007-8525e0364a55\") " pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.718730 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:00:52 crc kubenswrapper[4793]: I0130 14:00:52.760210 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw"] Jan 30 14:00:52 crc kubenswrapper[4793]: W0130 14:00:52.770756 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75266e51_59ee_432d_b56a_ba972e5ff25b.slice/crio-08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a WatchSource:0}: Error finding container 08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a: Status 404 returned error can't find the container with id 08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a Jan 30 14:00:53 crc kubenswrapper[4793]: I0130 14:00:53.223416 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm"] Jan 30 14:00:53 crc kubenswrapper[4793]: I0130 14:00:53.771536 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" event={"ID":"45949f1b-1075-4d7f-9007-8525e0364a55","Type":"ContainerStarted","Data":"81d327f9e4d091c903ed44b2db98cb10b84595ae7403eb29a1d2920048220390"} Jan 30 14:00:53 crc kubenswrapper[4793]: I0130 14:00:53.773227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" event={"ID":"75266e51-59ee-432d-b56a-ba972e5ff25b","Type":"ContainerStarted","Data":"08bb7b17d9c9bf73c6942c212867af712ee9590870e3995e442ac62abf727d6a"} Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.813907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" event={"ID":"45949f1b-1075-4d7f-9007-8525e0364a55","Type":"ContainerStarted","Data":"dbef2070ced1e914831bc297e4931170b201bc2f7f1e8591044ac25b8271cc4e"} Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.814481 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.815961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" event={"ID":"75266e51-59ee-432d-b56a-ba972e5ff25b","Type":"ContainerStarted","Data":"b46c2926e29b4e95f5f5d0040c3d731c6dae55acef58ff1dd29e79cd77ae5414"} Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.816117 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.854667 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" podStartSLOduration=1.800471706 podStartE2EDuration="8.854653299s" podCreationTimestamp="2026-01-30 14:00:52 +0000 UTC" firstStartedPulling="2026-01-30 14:00:53.233640421 +0000 UTC m=+1063.934988912" lastFinishedPulling="2026-01-30 14:01:00.287822014 +0000 UTC m=+1070.989170505" observedRunningTime="2026-01-30 14:01:00.852743772 +0000 UTC m=+1071.554092263" watchObservedRunningTime="2026-01-30 14:01:00.854653299 +0000 UTC m=+1071.556001790" Jan 30 14:01:00 crc kubenswrapper[4793]: I0130 14:01:00.881562 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" podStartSLOduration=2.388075282 podStartE2EDuration="9.881540274s" podCreationTimestamp="2026-01-30 14:00:51 +0000 UTC" firstStartedPulling="2026-01-30 14:00:52.776831125 +0000 UTC m=+1063.478179616" lastFinishedPulling="2026-01-30 14:01:00.270296107 +0000 UTC m=+1070.971644608" observedRunningTime="2026-01-30 14:01:00.875348292 +0000 UTC m=+1071.576696783" watchObservedRunningTime="2026-01-30 14:01:00.881540274 +0000 UTC m=+1071.582888775" Jan 30 14:01:12 crc kubenswrapper[4793]: I0130 14:01:12.725676 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6446fc49bd-rzbbm" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.264690 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fbd4d697c-ndglw" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.954390 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"] Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.955221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.960759 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vsdkv"] Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.962295 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.962702 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vfh4l" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.963420 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.966264 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.977464 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"] Jan 30 14:01:32 crc kubenswrapper[4793]: I0130 14:01:32.982036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.066600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g9hvr"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.067421 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.071975 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.072028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wpw4n" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.072127 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.072160 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.081647 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7nlfd"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.083980 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.086326 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.096230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7nlfd"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115699 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-sockets\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115764 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-reloader\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115857 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-startup\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115948 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxrl\" (UniqueName: \"kubernetes.io/projected/e5a76649-d081-4224-baca-095ca1ffadfd-kube-api-access-5gxrl\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.115977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5a76649-d081-4224-baca-095ca1ffadfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.116006 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-conf\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.116029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc 
kubenswrapper[4793]: I0130 14:01:33.116064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vn25\" (UniqueName: \"kubernetes.io/projected/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-kube-api-access-5vn25\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-startup\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217542 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217599 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217642 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gxrl\" (UniqueName: \"kubernetes.io/projected/e5a76649-d081-4224-baca-095ca1ffadfd-kube-api-access-5gxrl\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5a76649-d081-4224-baca-095ca1ffadfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217686 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-conf\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217729 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5vn25\" (UniqueName: \"kubernetes.io/projected/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-kube-api-access-5vn25\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217760 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tznfd\" (UniqueName: \"kubernetes.io/projected/519ea47c-0d76-44cb-af34-823c71e508c9-kube-api-access-tznfd\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmgd\" (UniqueName: \"kubernetes.io/projected/34253a93-968b-47e2-aa0d-43ddb72f29f5-kube-api-access-nqmgd\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-sockets\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217831 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-cert\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-reloader\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217875 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/519ea47c-0d76-44cb-af34-823c71e508c9-metallb-excludel2\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.217914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218378 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-startup\") pod \"frr-k8s-vsdkv\" (UID: 
\"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218478 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-sockets\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218551 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-reloader\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.218553 4793 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.218721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-frr-conf\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.218744 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs podName:fd03c93b-a2a7-4a2f-9292-29c4e7fe9640 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.71872899 +0000 UTC m=+1104.420077491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs") pod "frr-k8s-vsdkv" (UID: "fd03c93b-a2a7-4a2f-9292-29c4e7fe9640") : secret "frr-k8s-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.239751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5a76649-d081-4224-baca-095ca1ffadfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.244542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vn25\" (UniqueName: \"kubernetes.io/projected/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-kube-api-access-5vn25\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.248176 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxrl\" (UniqueName: \"kubernetes.io/projected/e5a76649-d081-4224-baca-095ca1ffadfd-kube-api-access-5gxrl\") pod \"frr-k8s-webhook-server-7df86c4f6c-4p6gx\" (UID: \"e5a76649-d081-4224-baca-095ca1ffadfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.271864 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-cert\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/519ea47c-0d76-44cb-af34-823c71e508c9-metallb-excludel2\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320622 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320698 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320754 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.820737864 +0000 UTC m=+1104.522086345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist") pod "speaker-g9hvr" (UID: "519ea47c-0d76-44cb-af34-823c71e508c9") : secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320867 4793 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320904 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs podName:34253a93-968b-47e2-aa0d-43ddb72f29f5 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.820894348 +0000 UTC m=+1104.522242839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs") pod "controller-6968d8fdc4-7nlfd" (UID: "34253a93-968b-47e2-aa0d-43ddb72f29f5") : secret "controller-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320943 4793 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.320962 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:33.82095565 +0000 UTC m=+1104.522304141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs") pod "speaker-g9hvr" (UID: "519ea47c-0d76-44cb-af34-823c71e508c9") : secret "speaker-certs-secret" not found Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tznfd\" (UniqueName: \"kubernetes.io/projected/519ea47c-0d76-44cb-af34-823c71e508c9-kube-api-access-tznfd\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.320998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmgd\" (UniqueName: \"kubernetes.io/projected/34253a93-968b-47e2-aa0d-43ddb72f29f5-kube-api-access-nqmgd\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.321606 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.321608 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/519ea47c-0d76-44cb-af34-823c71e508c9-metallb-excludel2\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.334970 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-cert\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.338688 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tznfd\" (UniqueName: \"kubernetes.io/projected/519ea47c-0d76-44cb-af34-823c71e508c9-kube-api-access-tznfd\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.342033 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqmgd\" (UniqueName: \"kubernetes.io/projected/34253a93-968b-47e2-aa0d-43ddb72f29f5-kube-api-access-nqmgd\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc 
kubenswrapper[4793]: I0130 14:01:33.728509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.736262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd03c93b-a2a7-4a2f-9292-29c4e7fe9640-metrics-certs\") pod \"frr-k8s-vsdkv\" (UID: \"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640\") " pod="metallb-system/frr-k8s-vsdkv" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.764557 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"] Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.831086 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.831172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.831292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr" Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.831519 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.831619 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:34.831592306 +0000 UTC m=+1105.532940797 (durationBeforeRetry 1s). 
Jan 30 14:01:33 crc kubenswrapper[4793]: E0130 14:01:33.831619 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist podName:519ea47c-0d76-44cb-af34-823c71e508c9 nodeName:}" failed. No retries permitted until 2026-01-30 14:01:34.831592306 +0000 UTC m=+1105.532940797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist") pod "speaker-g9hvr" (UID: "519ea47c-0d76-44cb-af34-823c71e508c9") : secret "metallb-memberlist" not found
Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.833835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-metrics-certs\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr"
Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.833986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34253a93-968b-47e2-aa0d-43ddb72f29f5-metrics-certs\") pod \"controller-6968d8fdc4-7nlfd\" (UID: \"34253a93-968b-47e2-aa0d-43ddb72f29f5\") " pod="metallb-system/controller-6968d8fdc4-7nlfd"
Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.886343 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vsdkv"
Jan 30 14:01:33 crc kubenswrapper[4793]: I0130 14:01:33.997753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7nlfd"
Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.009561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" event={"ID":"e5a76649-d081-4224-baca-095ca1ffadfd","Type":"ContainerStarted","Data":"9b5146874d13f4d31f06aaddacf281561f7d46f6b077b48c51b9f000dcbd0d0e"}
Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.010681 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"7c95a59c48a92c8b366fdea9ed434d8bf644e5ffdfe2e07fd52e0c27e610d4ef"}
Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.391112 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7nlfd"]
Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.842860 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr"
Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.852693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/519ea47c-0d76-44cb-af34-823c71e508c9-memberlist\") pod \"speaker-g9hvr\" (UID: \"519ea47c-0d76-44cb-af34-823c71e508c9\") " pod="metallb-system/speaker-g9hvr"
Jan 30 14:01:34 crc kubenswrapper[4793]: I0130 14:01:34.886853 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9hvr"
Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7nlfd" event={"ID":"34253a93-968b-47e2-aa0d-43ddb72f29f5","Type":"ContainerStarted","Data":"dfe4279ae2d210bbf8bd9d5d3aa03cafb76b2fdf6ec4618b351487593e95ef25"}
Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031582 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7nlfd" event={"ID":"34253a93-968b-47e2-aa0d-43ddb72f29f5","Type":"ContainerStarted","Data":"ae040937e950a1c01e1aa55941b17be8c87c194c59c5618d30f55e781e060b98"}
Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7nlfd" event={"ID":"34253a93-968b-47e2-aa0d-43ddb72f29f5","Type":"ContainerStarted","Data":"a55ad93b00780cc06317a7b9db28a3a4c7a5e17111bf25afb1a36dafa8b69089"}
Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.031800 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7nlfd"
Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.034070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9hvr" event={"ID":"519ea47c-0d76-44cb-af34-823c71e508c9","Type":"ContainerStarted","Data":"9ba01354ad7958c4a9de1ad88f1cde32729059ade62d1aee9109e3b563002e03"}
Jan 30 14:01:35 crc kubenswrapper[4793]: I0130 14:01:35.073844 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7nlfd" podStartSLOduration=2.07382175 podStartE2EDuration="2.07382175s" podCreationTimestamp="2026-01-30 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:01:35.069894355 +0000 UTC m=+1105.771242856" watchObservedRunningTime="2026-01-30 14:01:35.07382175 +0000 UTC m=+1105.775170241"
Jan 30 14:01:36 crc kubenswrapper[4793]: I0130 14:01:36.047211 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9hvr" event={"ID":"519ea47c-0d76-44cb-af34-823c71e508c9","Type":"ContainerStarted","Data":"1ee9c367594a5e421e3f6c274d3afcfc88807ccc5d199b8056f6b242eb22fa63"}
Jan 30 14:01:36 crc kubenswrapper[4793]: I0130 14:01:36.047519 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g9hvr"
Jan 30 14:01:36 crc kubenswrapper[4793]: I0130 14:01:36.047531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9hvr" event={"ID":"519ea47c-0d76-44cb-af34-823c71e508c9","Type":"ContainerStarted","Data":"ca87acd46560ec991e58acc711014a3627c02fb69a2e338aecda554a575aac37"}
Jan 30 14:01:40 crc kubenswrapper[4793]: I0130 14:01:40.420061 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g9hvr" podStartSLOduration=7.420030686 podStartE2EDuration="7.420030686s" podCreationTimestamp="2026-01-30 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:01:36.083505741 +0000 UTC m=+1106.784854232" watchObservedRunningTime="2026-01-30 14:01:40.420030686 +0000 UTC m=+1111.121379177"
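The nestedpendingoperations records encode a per-volume retry backoff: the metrics-certs retry above was deferred 500ms, the memberlist retry 1s, consistent with a schedule that starts at 500ms and doubles. A toy reproduction of that doubling; the 500ms start and factor of 2 match the durationBeforeRetry values in the log, while the cap is purely an assumption for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous mount-retry delay, starting at 500ms.
// 500ms -> 1s matches the durationBeforeRetry progression in the log;
// maxDelay is an assumed cap, not a value taken from the log.
func nextDelay(prev time.Duration) time.Duration {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2 * time.Minute // assumption
	)
	if prev < initialDelay {
		return initialDelay
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s
	}
}
```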
Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.092508 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" event={"ID":"e5a76649-d081-4224-baca-095ca1ffadfd","Type":"ContainerStarted","Data":"6b23b23e36b036d21c9866e86f4bd4415a7380ce763e80d79b935aeba20ce3c5"}
Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.092828 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"
Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.094155 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerID="5fb7a29a436be87a8e763d75695e072acdaf8c223e4c56b2767918ce48a6729d" exitCode=0
Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.094191 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerDied","Data":"5fb7a29a436be87a8e763d75695e072acdaf8c223e4c56b2767918ce48a6729d"}
Jan 30 14:01:42 crc kubenswrapper[4793]: I0130 14:01:42.118268 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx" podStartSLOduration=2.025261752 podStartE2EDuration="10.118251886s" podCreationTimestamp="2026-01-30 14:01:32 +0000 UTC" firstStartedPulling="2026-01-30 14:01:33.772978988 +0000 UTC m=+1104.474327479" lastFinishedPulling="2026-01-30 14:01:41.865969122 +0000 UTC m=+1112.567317613" observedRunningTime="2026-01-30 14:01:42.113656273 +0000 UTC m=+1112.815004764" watchObservedRunningTime="2026-01-30 14:01:42.118251886 +0000 UTC m=+1112.819600377"
Jan 30 14:01:43 crc kubenswrapper[4793]: I0130 14:01:43.113639 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerID="edb8533e88a849f1bc20730726fbe83503bc548487a645c00ef105a432a537d9" exitCode=0
Jan 30 14:01:43 crc kubenswrapper[4793]: I0130 14:01:43.114229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerDied","Data":"edb8533e88a849f1bc20730726fbe83503bc548487a645c00ef105a432a537d9"}
Jan 30 14:01:44 crc kubenswrapper[4793]: I0130 14:01:44.121720 4793 generic.go:334] "Generic (PLEG): container finished" podID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerID="cc22063a433ce7648df80092a5841177f3d98616c476be6534e1f35058b90b32" exitCode=0
Jan 30 14:01:44 crc kubenswrapper[4793]: I0130 14:01:44.121767 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerDied","Data":"cc22063a433ce7648df80092a5841177f3d98616c476be6534e1f35058b90b32"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133661 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"401c2b5ead30e687f60f4190bd3d1b789c35a8d9e3ca757b388835ef5fa1fb62"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133944 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"94de661169ac29ab4c772ec6fcc3de9a07e741647366c5bd7485a59b1e993bb2"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vsdkv"
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"829455f8b6a1962bc87cddb75f4aa4d1e13f7edab06ef6a93b948d66d5bbbdfe"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"494e73aa0666e7b28870c90b627b6bf761e6bff3d3a4def4e212a20175893e3a"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.133999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"8988296c867e968080559962b56c1726bac7a6dddb3743bef5827f83de1a5510"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.134010 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vsdkv" event={"ID":"fd03c93b-a2a7-4a2f-9292-29c4e7fe9640","Type":"ContainerStarted","Data":"40986c71bb128f32b68cfdba9fba550c525aec47170eb0a732f261df8d267654"}
Jan 30 14:01:45 crc kubenswrapper[4793]: I0130 14:01:45.171190 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vsdkv" podStartSLOduration=5.341750885 podStartE2EDuration="13.171168819s" podCreationTimestamp="2026-01-30 14:01:32 +0000 UTC" firstStartedPulling="2026-01-30 14:01:33.997597999 +0000 UTC m=+1104.698946490" lastFinishedPulling="2026-01-30 14:01:41.827015923 +0000 UTC m=+1112.528364424" observedRunningTime="2026-01-30 14:01:45.167304984 +0000 UTC m=+1115.868653515" watchObservedRunningTime="2026-01-30 14:01:45.171168819 +0000 UTC m=+1115.872517330"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.576825 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-56nnw"]
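One relationship worth decoding in the pod_startup_latency_tracker records: podStartSLOduration is podStartE2EDuration with the image-pull window subtracted, since pulls are excluded from the startup SLO. For frr-k8s-vsdkv above, 13.171s end-to-end minus the roughly 7.829s pull (14:01:33.997 to 14:01:41.827) gives the reported ~5.342s. A sketch re-deriving it from the record's own timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching the timestamps printed in the tracker record.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the frr-k8s-vsdkv record above.
	created := mustParse("2026-01-30 14:01:32 +0000 UTC")
	firstPull := mustParse("2026-01-30 14:01:33.997597999 +0000 UTC")
	lastPull := mustParse("2026-01-30 14:01:41.827015923 +0000 UTC")
	observed := mustParse("2026-01-30 14:01:45.171168819 +0000 UTC")

	e2e := observed.Sub(created)         // 13.171168819s, the podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ≈5.3418s, matching podStartSLOduration up to tracker rounding
	fmt.Println(e2e, slo)
}
```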
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.592418 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.608614 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56nnw"]
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.710239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.710321 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.710356 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.811796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.811871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.811925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.812349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.812821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.844473 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"community-operators-56nnw\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") " pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:46 crc kubenswrapper[4793]: I0130 14:01:46.932736 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:47 crc kubenswrapper[4793]: I0130 14:01:47.217431 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56nnw"]
Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.159122 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08" exitCode=0
Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.159253 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"}
Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.159421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerStarted","Data":"d70b5154f44309d65bfea32a6c3d3a229ac334ced2b321492bb858f4e69e0990"}
Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.887327 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vsdkv"
Jan 30 14:01:48 crc kubenswrapper[4793]: I0130 14:01:48.930413 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vsdkv"
Jan 30 14:01:50 crc kubenswrapper[4793]: I0130 14:01:50.172044 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923" exitCode=0
Jan 30 14:01:50 crc kubenswrapper[4793]: I0130 14:01:50.172115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"}
Jan 30 14:01:51 crc kubenswrapper[4793]: I0130 14:01:51.180376 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerStarted","Data":"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"}
Jan 30 14:01:51 crc kubenswrapper[4793]: I0130 14:01:51.210925 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-56nnw" podStartSLOduration=2.753086346 podStartE2EDuration="5.210908455s" podCreationTimestamp="2026-01-30 14:01:46 +0000 UTC" firstStartedPulling="2026-01-30 14:01:48.161007926 +0000 UTC m=+1118.862356417" lastFinishedPulling="2026-01-30 14:01:50.618830035 +0000 UTC m=+1121.320178526" observedRunningTime="2026-01-30 14:01:51.206501398 +0000 UTC m=+1121.907849929" watchObservedRunningTime="2026-01-30 14:01:51.210908455 +0000 UTC m=+1121.912256956"
Jan 30 14:01:53 crc kubenswrapper[4793]: I0130 14:01:53.276433 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4p6gx"
Jan 30 14:01:54 crc kubenswrapper[4793]: I0130 14:01:54.001037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7nlfd"
Jan 30 14:01:54 crc kubenswrapper[4793]: I0130 14:01:54.895348 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g9hvr"
Jan 30 14:01:56 crc kubenswrapper[4793]: I0130 14:01:56.933773 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:56 crc kubenswrapper[4793]: I0130 14:01:56.934014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:56 crc kubenswrapper[4793]: I0130 14:01:56.992850 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:57 crc kubenswrapper[4793]: I0130 14:01:57.269827 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:57 crc kubenswrapper[4793]: I0130 14:01:57.306352 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56nnw"]
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.234002 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-56nnw" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server" containerID="cri-o://e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" gracePeriod=2
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.606896 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.778823 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") pod \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") "
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.779002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") pod \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") "
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.779069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") pod \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\" (UID: \"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf\") "
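The catalog pod's lifecycle above follows the normal PLEG pattern: the extract-utilities and extract-content init containers each exit 0 (ContainerDied), then registry-server starts, passes its probes, and finally the pod is deleted and torn down. The same transitions can be observed from the API side; a rough client-go sketch (pod name and namespace come from the log, everything else is illustrative):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().Pods("openshift-marketplace").Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=community-operators-56nnw"})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// Init containers (extract-utilities, extract-content) terminate one
		// by one before the registry-server container ever runs.
		all := append(pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses...)
		for _, st := range all {
			switch {
			case st.State.Terminated != nil:
				fmt.Printf("%s terminated, exit %d\n", st.Name, st.State.Terminated.ExitCode)
			case st.State.Running != nil:
				fmt.Printf("%s running\n", st.Name)
			}
		}
	}
}
```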
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.780161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities" (OuterVolumeSpecName: "utilities") pod "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" (UID: "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.788204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms" (OuterVolumeSpecName: "kube-api-access-wxjms") pod "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" (UID: "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf"). InnerVolumeSpecName "kube-api-access-wxjms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.837566 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" (UID: "f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.880345 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.880404 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjms\" (UniqueName: \"kubernetes.io/projected/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-kube-api-access-wxjms\") on node \"crc\" DevicePath \"\""
Jan 30 14:01:59 crc kubenswrapper[4793]: I0130 14:01:59.880414 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242610 4793 generic.go:334] "Generic (PLEG): container finished" podID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e" exitCode=0
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242664 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"}
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242697 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56nnw" event={"ID":"f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf","Type":"ContainerDied","Data":"d70b5154f44309d65bfea32a6c3d3a229ac334ced2b321492bb858f4e69e0990"}
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242720 4793 scope.go:117] "RemoveContainer" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"
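The earlier "Killing container with a grace period ... gracePeriod=2" record is the kubelet honoring a short termination grace period for the DELETE that arrived at 14:01:57. For reference, issuing an equivalent delete through client-go looks roughly like this (names from the log; a sketch, not how the marketplace machinery actually does it):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	grace := int64(2) // mirrors gracePeriod=2 in the record above
	if err := cs.CoreV1().Pods("openshift-marketplace").Delete(context.TODO(),
		"community-operators-56nnw",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}
```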
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.242776 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56nnw"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.266498 4793 scope.go:117] "RemoveContainer" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.276456 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56nnw"]
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.285527 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-56nnw"]
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.299660 4793 scope.go:117] "RemoveContainer" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.319374 4793 scope.go:117] "RemoveContainer" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"
Jan 30 14:02:00 crc kubenswrapper[4793]: E0130 14:02:00.319956 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e\": container with ID starting with e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e not found: ID does not exist" containerID="e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.319997 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e"} err="failed to get container status \"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e\": rpc error: code = NotFound desc = could not find container \"e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e\": container with ID starting with e3319e5cfe64abac5e75b5e5ffcde928d773d59f5bf33b6980c58201013b393e not found: ID does not exist"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.320026 4793 scope.go:117] "RemoveContainer" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"
Jan 30 14:02:00 crc kubenswrapper[4793]: E0130 14:02:00.320522 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923\": container with ID starting with 98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923 not found: ID does not exist" containerID="98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.320566 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923"} err="failed to get container status \"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923\": rpc error: code = NotFound desc = could not find container \"98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923\": container with ID starting with 98a04cb43d5f6ced36dd7ade48e4df3b0eb27ba74da2750a53a98a23ec28e923 not found: ID does not exist"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.320595 4793 scope.go:117] "RemoveContainer" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"
Jan 30 14:02:00 crc kubenswrapper[4793]: E0130 14:02:00.322142 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08\": container with ID starting with af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08 not found: ID does not exist" containerID="af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.322175 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08"} err="failed to get container status \"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08\": rpc error: code = NotFound desc = could not find container \"af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08\": container with ID starting with af286c4fd707616454b37b618046318f8f045de9c0d091fbffe8921ac2bebf08 not found: ID does not exist"
Jan 30 14:02:00 crc kubenswrapper[4793]: I0130 14:02:00.405671 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" path="/var/lib/kubelet/pods/f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf/volumes"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.234840 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x56zx"]
Jan 30 14:02:01 crc kubenswrapper[4793]: E0130 14:02:01.235756 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-content"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.235861 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-content"
Jan 30 14:02:01 crc kubenswrapper[4793]: E0130 14:02:01.235964 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-utilities"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.236030 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="extract-utilities"
Jan 30 14:02:01 crc kubenswrapper[4793]: E0130 14:02:01.236174 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.236278 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.236483 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a3b80b-f3e4-42fa-80b6-1a4129fc30bf" containerName="registry-server"
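The RemoveContainer / "code = NotFound" pairs above are benign: the containers are already gone, and the kubelet's cleanup treats a gRPC NotFound from CRI-O as success so that deletion stays idempotent. The same pattern in miniature (the remove callback below is a placeholder, not the real CRI client):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats NotFound as "already gone", the way the kubelet's
// container cleanup does; remove stands in for the RPC that deletes a container.
func removeIfPresent(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err // a real failure
	}
	return nil // removed now, or already gone
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeIfPresent(gone, "e3319e5cfe64")) // <nil>
}
```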
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.237073 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.238886 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-sl2qr"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.240093 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.240111 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.248617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x56zx"]
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.300083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbt5q\" (UniqueName: \"kubernetes.io/projected/e3b6e703-4540-4739-87cd-8699d4e04903-kube-api-access-mbt5q\") pod \"openstack-operator-index-x56zx\" (UID: \"e3b6e703-4540-4739-87cd-8699d4e04903\") " pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.400959 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbt5q\" (UniqueName: \"kubernetes.io/projected/e3b6e703-4540-4739-87cd-8699d4e04903-kube-api-access-mbt5q\") pod \"openstack-operator-index-x56zx\" (UID: \"e3b6e703-4540-4739-87cd-8699d4e04903\") " pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.428230 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbt5q\" (UniqueName: \"kubernetes.io/projected/e3b6e703-4540-4739-87cd-8699d4e04903-kube-api-access-mbt5q\") pod \"openstack-operator-index-x56zx\" (UID: \"e3b6e703-4540-4739-87cd-8699d4e04903\") " pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.560462 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:01 crc kubenswrapper[4793]: I0130 14:02:01.949854 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x56zx"]
Jan 30 14:02:02 crc kubenswrapper[4793]: I0130 14:02:02.264843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x56zx" event={"ID":"e3b6e703-4540-4739-87cd-8699d4e04903","Type":"ContainerStarted","Data":"226d55b90f69dab77f9d4235c816591b31c824d25f367bf23d510f0a1936f75c"}
Jan 30 14:02:03 crc kubenswrapper[4793]: I0130 14:02:03.890364 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vsdkv"
Jan 30 14:02:05 crc kubenswrapper[4793]: I0130 14:02:05.289306 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x56zx" event={"ID":"e3b6e703-4540-4739-87cd-8699d4e04903","Type":"ContainerStarted","Data":"f708679f5ce339156245bdd2a083fd4fa03d7d616c7d2d83ab2a8b5931ea4852"}
Jan 30 14:02:05 crc kubenswrapper[4793]: I0130 14:02:05.305952 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x56zx" podStartSLOduration=1.566048748 podStartE2EDuration="4.305934597s" podCreationTimestamp="2026-01-30 14:02:01 +0000 UTC" firstStartedPulling="2026-01-30 14:02:01.958360768 +0000 UTC m=+1132.659709269" lastFinishedPulling="2026-01-30 14:02:04.698246627 +0000 UTC m=+1135.399595118" observedRunningTime="2026-01-30 14:02:05.303960919 +0000 UTC m=+1136.005309440" watchObservedRunningTime="2026-01-30 14:02:05.305934597 +0000 UTC m=+1136.007283088"
Jan 30 14:02:11 crc kubenswrapper[4793]: I0130 14:02:11.561648 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:11 crc kubenswrapper[4793]: I0130 14:02:11.562222 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:11 crc kubenswrapper[4793]: I0130 14:02:11.588441 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:12 crc kubenswrapper[4793]: I0130 14:02:12.350940 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x56zx"
Jan 30 14:02:12 crc kubenswrapper[4793]: I0130 14:02:12.413768 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:02:12 crc kubenswrapper[4793]: I0130 14:02:12.413821 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.469503 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"]
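The machine-config-daemon liveness failure above is a dial-level refusal, not an unhealthy HTTP response: nothing was accepting connections on 127.0.0.1:8798 when the prober dialed. For contrast, the general shape of an endpoint that would satisfy such a probe (illustrative only; the port comes from the log, the handler is not machine-config-daemon's real one):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// A liveness probe like the one in the log just needs any 2xx from
	// /health; "connection refused" means no listener was bound at all.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```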
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.471236 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.474761 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wlc8d"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.479669 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"]
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.672647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.672715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.672747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774034 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.774707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:14 crc kubenswrapper[4793]: I0130 14:02:14.799870 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") " pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:15 crc kubenswrapper[4793]: I0130 14:02:15.096294 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:15 crc kubenswrapper[4793]: I0130 14:02:15.511298 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"]
Jan 30 14:02:16 crc kubenswrapper[4793]: I0130 14:02:16.351486 4793 generic.go:334] "Generic (PLEG): container finished" podID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerID="917aa863c9411cd87fd6db746368b2d0374fb47ded475d4d6f1c8c96e997d0aa" exitCode=0
Jan 30 14:02:16 crc kubenswrapper[4793]: I0130 14:02:16.351593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"917aa863c9411cd87fd6db746368b2d0374fb47ded475d4d6f1c8c96e997d0aa"}
Jan 30 14:02:16 crc kubenswrapper[4793]: I0130 14:02:16.351799 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerStarted","Data":"94f654f563720b16c4af45497f07bcd9437b1b119ad6e43f2e4b5fb59b7f5fa5"}
Jan 30 14:02:18 crc kubenswrapper[4793]: I0130 14:02:18.370202 4793 generic.go:334] "Generic (PLEG): container finished" podID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerID="a2ca29c6644180959d91d56ab86c50b3648a6735e80885b2aa1ae3ac4af651ea" exitCode=0
Jan 30 14:02:18 crc kubenswrapper[4793]: I0130 14:02:18.370249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"a2ca29c6644180959d91d56ab86c50b3648a6735e80885b2aa1ae3ac4af651ea"}
Jan 30 14:02:19 crc kubenswrapper[4793]: I0130 14:02:19.380005 4793 generic.go:334] "Generic (PLEG): container finished" podID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerID="9be403200582a47cb3f99a3a4e1fbd1249a57d1ec973d6ccd83c1f3684be0107" exitCode=0
Jan 30 14:02:19 crc kubenswrapper[4793]: I0130 14:02:19.380107 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"9be403200582a47cb3f99a3a4e1fbd1249a57d1ec973d6ccd83c1f3684be0107"}
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.632028 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.751692 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") pod \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") "
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.751822 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") pod \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") "
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.751861 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") pod \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\" (UID: \"fa68ea40-d98a-4561-8dce-aa3e81fe5a96\") "
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.752387 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle" (OuterVolumeSpecName: "bundle") pod "fa68ea40-d98a-4561-8dce-aa3e81fe5a96" (UID: "fa68ea40-d98a-4561-8dce-aa3e81fe5a96"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.763783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl" (OuterVolumeSpecName: "kube-api-access-tthnl") pod "fa68ea40-d98a-4561-8dce-aa3e81fe5a96" (UID: "fa68ea40-d98a-4561-8dce-aa3e81fe5a96"). InnerVolumeSpecName "kube-api-access-tthnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.768317 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util" (OuterVolumeSpecName: "util") pod "fa68ea40-d98a-4561-8dce-aa3e81fe5a96" (UID: "fa68ea40-d98a-4561-8dce-aa3e81fe5a96"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.855786 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tthnl\" (UniqueName: \"kubernetes.io/projected/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-kube-api-access-tthnl\") on node \"crc\" DevicePath \"\""
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.855826 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:02:20 crc kubenswrapper[4793]: I0130 14:02:20.855848 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa68ea40-d98a-4561-8dce-aa3e81fe5a96-util\") on node \"crc\" DevicePath \"\""
Jan 30 14:02:21 crc kubenswrapper[4793]: I0130 14:02:21.395994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l" event={"ID":"fa68ea40-d98a-4561-8dce-aa3e81fe5a96","Type":"ContainerDied","Data":"94f654f563720b16c4af45497f07bcd9437b1b119ad6e43f2e4b5fb59b7f5fa5"}
Jan 30 14:02:21 crc kubenswrapper[4793]: I0130 14:02:21.396091 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l"
Jan 30 14:02:21 crc kubenswrapper[4793]: I0130 14:02:21.396114 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f654f563720b16c4af45497f07bcd9437b1b119ad6e43f2e4b5fb59b7f5fa5"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.791650 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"]
Jan 30 14:02:24 crc kubenswrapper[4793]: E0130 14:02:24.792079 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="pull"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792109 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="pull"
Jan 30 14:02:24 crc kubenswrapper[4793]: E0130 14:02:24.792127 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="extract"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792133 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="extract"
Jan 30 14:02:24 crc kubenswrapper[4793]: E0130 14:02:24.792147 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="util"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792153 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="util"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792264 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa68ea40-d98a-4561-8dce-aa3e81fe5a96" containerName="extract"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.792621 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.795227 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nl4zd"
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.823829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"]
Jan 30 14:02:24 crc kubenswrapper[4793]: I0130 14:02:24.957671 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwdn\" (UniqueName: \"kubernetes.io/projected/2cec3782-823b-4ddf-909a-e773203cd721-kube-api-access-vxwdn\") pod \"openstack-operator-controller-init-977cfdb67-sp4rd\" (UID: \"2cec3782-823b-4ddf-909a-e773203cd721\") " pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.059535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwdn\" (UniqueName: \"kubernetes.io/projected/2cec3782-823b-4ddf-909a-e773203cd721-kube-api-access-vxwdn\") pod \"openstack-operator-controller-init-977cfdb67-sp4rd\" (UID: \"2cec3782-823b-4ddf-909a-e773203cd721\") " pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.088118 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwdn\" (UniqueName: \"kubernetes.io/projected/2cec3782-823b-4ddf-909a-e773203cd721-kube-api-access-vxwdn\") pod \"openstack-operator-controller-init-977cfdb67-sp4rd\" (UID: \"2cec3782-823b-4ddf-909a-e773203cd721\") " pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.109759 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:25 crc kubenswrapper[4793]: I0130 14:02:25.644285 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"]
Jan 30 14:02:26 crc kubenswrapper[4793]: I0130 14:02:26.429610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" event={"ID":"2cec3782-823b-4ddf-909a-e773203cd721","Type":"ContainerStarted","Data":"36eba34a55476c58bc4d8b188b293d9323ab5932c2ff24e77e6d450f745e8661"}
Jan 30 14:02:30 crc kubenswrapper[4793]: I0130 14:02:30.469430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" event={"ID":"2cec3782-823b-4ddf-909a-e773203cd721","Type":"ContainerStarted","Data":"c859e3c068c9bd897c5311ad1b1ea39e519eae368b7bbe2936f5bf181bbf8c4b"}
Jan 30 14:02:30 crc kubenswrapper[4793]: I0130 14:02:30.470375 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:30 crc kubenswrapper[4793]: I0130 14:02:30.505694 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd" podStartSLOduration=2.131778723 podStartE2EDuration="6.505678078s" podCreationTimestamp="2026-01-30 14:02:24 +0000 UTC" firstStartedPulling="2026-01-30 14:02:25.670553309 +0000 UTC m=+1156.371901810" lastFinishedPulling="2026-01-30 14:02:30.044452624 +0000 UTC m=+1160.745801165" observedRunningTime="2026-01-30 14:02:30.50086212 +0000 UTC m=+1161.202210621" watchObservedRunningTime="2026-01-30 14:02:30.505678078 +0000 UTC m=+1161.207026569"
Jan 30 14:02:35 crc kubenswrapper[4793]: I0130 14:02:35.113776 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-977cfdb67-sp4rd"
Jan 30 14:02:42 crc kubenswrapper[4793]: I0130 14:02:42.414347 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:02:42 crc kubenswrapper[4793]: I0130 14:02:42.414884 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.225921 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.227195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.231867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2zhj7"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.240679 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.241471 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.251253 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z8x8b"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.270040 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.270893 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.275972 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-slkkb"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.307960 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.317662 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.318372 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.324013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-l44rg"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.326983 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl4hd\" (UniqueName: \"kubernetes.io/projected/1d859404-a29c-46c9-b66a-fed5ff0b13f0-kube-api-access-jl4hd\") pod \"glance-operator-controller-manager-8886f4c47-g5848\" (UID: \"1d859404-a29c-46c9-b66a-fed5ff0b13f0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.327140 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bkt\" (UniqueName: \"kubernetes.io/projected/8835e5d9-c37d-4744-95cb-c56c10a58647-kube-api-access-l8bkt\") pod \"cinder-operator-controller-manager-8d874c8fc-9kwwr\" (UID: \"8835e5d9-c37d-4744-95cb-c56c10a58647\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.327161 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tw28\" (UniqueName: \"kubernetes.io/projected/ec981da4-a3ba-4e4e-a0eb-2168ab79fe77-kube-api-access-5tw28\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8bg6c\" (UID: \"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.327233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpv7\" (UniqueName: \"kubernetes.io/projected/6f991e04-2db3-4b32-bc83-8bbce4ce7a08-kube-api-access-wmpv7\") pod \"designate-operator-controller-manager-6d9697b7f4-hjpkr\" (UID: \"6f991e04-2db3-4b32-bc83-8bbce4ce7a08\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.348931 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.349812 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.352917 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fdblm"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.377160 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.377952 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.381936 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pbsph"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.385106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.397203 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.422396 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"]
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.423290 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.427650 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ct9pn"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.427800 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.428756 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j72j\" (UniqueName: \"kubernetes.io/projected/8d24cd33-2902-424a-8ffc-76b1e4c2f482-kube-api-access-9j72j\") pod \"heat-operator-controller-manager-69d6db494d-k4tz9\" (UID: \"8d24cd33-2902-424a-8ffc-76b1e4c2f482\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/710c57e4-a09e-4db1-a03b-13db05085d41-kube-api-access-4ptwr\") pod \"horizon-operator-controller-manager-5fb775575f-m4q78\" (UID: \"710c57e4-a09e-4db1-a03b-13db05085d41\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430412 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bkt\" (UniqueName: \"kubernetes.io/projected/8835e5d9-c37d-4744-95cb-c56c10a58647-kube-api-access-l8bkt\") pod \"cinder-operator-controller-manager-8d874c8fc-9kwwr\" (UID: \"8835e5d9-c37d-4744-95cb-c56c10a58647\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"
Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430497 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tw28\" (UniqueName:
\"kubernetes.io/projected/ec981da4-a3ba-4e4e-a0eb-2168ab79fe77-kube-api-access-5tw28\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8bg6c\" (UID: \"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzd2\" (UniqueName: \"kubernetes.io/projected/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-kube-api-access-7vzd2\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430711 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpv7\" (UniqueName: \"kubernetes.io/projected/6f991e04-2db3-4b32-bc83-8bbce4ce7a08-kube-api-access-wmpv7\") pod \"designate-operator-controller-manager-6d9697b7f4-hjpkr\" (UID: \"6f991e04-2db3-4b32-bc83-8bbce4ce7a08\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.430811 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl4hd\" (UniqueName: \"kubernetes.io/projected/1d859404-a29c-46c9-b66a-fed5ff0b13f0-kube-api-access-jl4hd\") pod \"glance-operator-controller-manager-8886f4c47-g5848\" (UID: \"1d859404-a29c-46c9-b66a-fed5ff0b13f0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.441191 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.469913 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.475438 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bkt\" (UniqueName: \"kubernetes.io/projected/8835e5d9-c37d-4744-95cb-c56c10a58647-kube-api-access-l8bkt\") pod \"cinder-operator-controller-manager-8d874c8fc-9kwwr\" (UID: \"8835e5d9-c37d-4744-95cb-c56c10a58647\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.489390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.489923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpv7\" (UniqueName: \"kubernetes.io/projected/6f991e04-2db3-4b32-bc83-8bbce4ce7a08-kube-api-access-wmpv7\") pod \"designate-operator-controller-manager-6d9697b7f4-hjpkr\" (UID: \"6f991e04-2db3-4b32-bc83-8bbce4ce7a08\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.492860 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.515456 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.520876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl4hd\" (UniqueName: \"kubernetes.io/projected/1d859404-a29c-46c9-b66a-fed5ff0b13f0-kube-api-access-jl4hd\") pod \"glance-operator-controller-manager-8886f4c47-g5848\" (UID: \"1d859404-a29c-46c9-b66a-fed5ff0b13f0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.521487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tw28\" (UniqueName: \"kubernetes.io/projected/ec981da4-a3ba-4e4e-a0eb-2168ab79fe77-kube-api-access-5tw28\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8bg6c\" (UID: \"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.542920 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2xtcj" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.552470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vzd2\" (UniqueName: \"kubernetes.io/projected/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-kube-api-access-7vzd2\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568840 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j72j\" (UniqueName: \"kubernetes.io/projected/8d24cd33-2902-424a-8ffc-76b1e4c2f482-kube-api-access-9j72j\") pod \"heat-operator-controller-manager-69d6db494d-k4tz9\" (UID: \"8d24cd33-2902-424a-8ffc-76b1e4c2f482\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.568952 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/710c57e4-a09e-4db1-a03b-13db05085d41-kube-api-access-4ptwr\") pod \"horizon-operator-controller-manager-5fb775575f-m4q78\" (UID: \"710c57e4-a09e-4db1-a03b-13db05085d41\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: E0130 14:02:52.573038 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:52 crc kubenswrapper[4793]: E0130 14:02:52.575478 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:53.075442347 +0000 UTC m=+1183.776790848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.577360 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.628553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.634627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.657312 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.676448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkv6\" (UniqueName: \"kubernetes.io/projected/7c34e714-0f18-4e41-ab9c-1dfe4859e644-kube-api-access-pdkv6\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v77jx\" (UID: \"7c34e714-0f18-4e41-ab9c-1dfe4859e644\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.677113 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679143 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j72j\" (UniqueName: \"kubernetes.io/projected/8d24cd33-2902-424a-8ffc-76b1e4c2f482-kube-api-access-9j72j\") pod \"heat-operator-controller-manager-69d6db494d-k4tz9\" (UID: \"8d24cd33-2902-424a-8ffc-76b1e4c2f482\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/710c57e4-a09e-4db1-a03b-13db05085d41-kube-api-access-4ptwr\") pod \"horizon-operator-controller-manager-5fb775575f-m4q78\" (UID: \"710c57e4-a09e-4db1-a03b-13db05085d41\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.679946 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.690226 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.694509 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-b5dsj" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.697110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vzd2\" (UniqueName: \"kubernetes.io/projected/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-kube-api-access-7vzd2\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.700292 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.701247 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.709482 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.710403 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7kdf8" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.729581 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.730331 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.742353 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ql2x2" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.766501 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.782669 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.783359 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.791498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.802607 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7nrsc" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.804409 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctdf\" (UniqueName: \"kubernetes.io/projected/bdcd04f7-09fa-4b1b-8b99-3de61a28a337-kube-api-access-qctdf\") pod \"keystone-operator-controller-manager-84f48565d4-82cvq\" (UID: \"bdcd04f7-09fa-4b1b-8b99-3de61a28a337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/05415bc7-22dc-4b15-a047-6ed62755638d-kube-api-access-6kkgj\") pod \"neutron-operator-controller-manager-585dbc889-x6pk6\" (UID: \"05415bc7-22dc-4b15-a047-6ed62755638d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkv6\" (UniqueName: \"kubernetes.io/projected/7c34e714-0f18-4e41-ab9c-1dfe4859e644-kube-api-access-pdkv6\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v77jx\" (UID: \"7c34e714-0f18-4e41-ab9c-1dfe4859e644\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.809450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66zm\" (UniqueName: \"kubernetes.io/projected/ce9be14f-8255-421e-91b4-a30fc5482ff4-kube-api-access-n66zm\") pod \"manila-operator-controller-manager-7dd968899f-9ftxd\" (UID: \"ce9be14f-8255-421e-91b4-a30fc5482ff4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.847622 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.848469 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.851824 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qrbz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.868741 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.911577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkv6\" (UniqueName: \"kubernetes.io/projected/7c34e714-0f18-4e41-ab9c-1dfe4859e644-kube-api-access-pdkv6\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v77jx\" (UID: \"7c34e714-0f18-4e41-ab9c-1dfe4859e644\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.912263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntmr\" (UniqueName: \"kubernetes.io/projected/31ca6ac1-d2da-4325-baa4-e18fc3514721-kube-api-access-zntmr\") pod \"nova-operator-controller-manager-55bff696bd-vtx9d\" (UID: \"31ca6ac1-d2da-4325-baa4-e18fc3514721\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.927275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n66zm\" (UniqueName: \"kubernetes.io/projected/ce9be14f-8255-421e-91b4-a30fc5482ff4-kube-api-access-n66zm\") pod \"manila-operator-controller-manager-7dd968899f-9ftxd\" (UID: \"ce9be14f-8255-421e-91b4-a30fc5482ff4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.928038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctdf\" (UniqueName: \"kubernetes.io/projected/bdcd04f7-09fa-4b1b-8b99-3de61a28a337-kube-api-access-qctdf\") pod \"keystone-operator-controller-manager-84f48565d4-82cvq\" (UID: \"bdcd04f7-09fa-4b1b-8b99-3de61a28a337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.928227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/05415bc7-22dc-4b15-a047-6ed62755638d-kube-api-access-6kkgj\") pod \"neutron-operator-controller-manager-585dbc889-x6pk6\" (UID: \"05415bc7-22dc-4b15-a047-6ed62755638d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.928417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wj8\" (UniqueName: \"kubernetes.io/projected/fa88d14c-0581-439c-9da1-f1123e41a65a-kube-api-access-t7wj8\") pod \"mariadb-operator-controller-manager-67bf948998-n29l5\" (UID: \"fa88d14c-0581-439c-9da1-f1123e41a65a\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.916204 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.929671 4793 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.934787 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xdkjq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.934906 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.959859 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.960820 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.964435 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n66zm\" (UniqueName: \"kubernetes.io/projected/ce9be14f-8255-421e-91b4-a30fc5482ff4-kube-api-access-n66zm\") pod \"manila-operator-controller-manager-7dd968899f-9ftxd\" (UID: \"ce9be14f-8255-421e-91b4-a30fc5482ff4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.968861 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.974960 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.975987 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.979950 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.987140 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-szxt7" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.987171 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.987321 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-spknc" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.991239 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctdf\" (UniqueName: \"kubernetes.io/projected/bdcd04f7-09fa-4b1b-8b99-3de61a28a337-kube-api-access-qctdf\") pod \"keystone-operator-controller-manager-84f48565d4-82cvq\" (UID: \"bdcd04f7-09fa-4b1b-8b99-3de61a28a337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.993289 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"] Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.994073 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/05415bc7-22dc-4b15-a047-6ed62755638d-kube-api-access-6kkgj\") pod \"neutron-operator-controller-manager-585dbc889-x6pk6\" (UID: \"05415bc7-22dc-4b15-a047-6ed62755638d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:52 crc kubenswrapper[4793]: I0130 14:02:52.994407 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:52.997800 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-v7v88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.019379 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.020225 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.022626 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.023617 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ssfbg" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wj8\" (UniqueName: \"kubernetes.io/projected/fa88d14c-0581-439c-9da1-f1123e41a65a-kube-api-access-t7wj8\") pod \"mariadb-operator-controller-manager-67bf948998-n29l5\" (UID: \"fa88d14c-0581-439c-9da1-f1123e41a65a\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcgl\" (UniqueName: \"kubernetes.io/projected/53576ec8-2f6d-4781-8906-726529cc6049-kube-api-access-ffcgl\") pod \"octavia-operator-controller-manager-6687f8d877-5nsr4\" (UID: \"53576ec8-2f6d-4781-8906-726529cc6049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029639 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntmr\" (UniqueName: \"kubernetes.io/projected/31ca6ac1-d2da-4325-baa4-e18fc3514721-kube-api-access-zntmr\") pod \"nova-operator-controller-manager-55bff696bd-vtx9d\" (UID: \"31ca6ac1-d2da-4325-baa4-e18fc3514721\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq5g\" (UniqueName: \"kubernetes.io/projected/6231ed92-57a8-4c48-9c75-e916940b22ea-kube-api-access-trq5g\") pod \"ovn-operator-controller-manager-788c46999f-4ml88\" (UID: \"6231ed92-57a8-4c48-9c75-e916940b22ea\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.029747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6f8\" (UniqueName: \"kubernetes.io/projected/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-kube-api-access-rn6f8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.039599 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"] Jan 30 14:02:53 
crc kubenswrapper[4793]: I0130 14:02:53.057574 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.060348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.070586 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntmr\" (UniqueName: \"kubernetes.io/projected/31ca6ac1-d2da-4325-baa4-e18fc3514721-kube-api-access-zntmr\") pod \"nova-operator-controller-manager-55bff696bd-vtx9d\" (UID: \"31ca6ac1-d2da-4325-baa4-e18fc3514721\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.071303 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wj8\" (UniqueName: \"kubernetes.io/projected/fa88d14c-0581-439c-9da1-f1123e41a65a-kube-api-access-t7wj8\") pod \"mariadb-operator-controller-manager-67bf948998-n29l5\" (UID: \"fa88d14c-0581-439c-9da1-f1123e41a65a\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.087435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.105868 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.117278 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.120979 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131806 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trq5g\" (UniqueName: \"kubernetes.io/projected/6231ed92-57a8-4c48-9c75-e916940b22ea-kube-api-access-trq5g\") pod \"ovn-operator-controller-manager-788c46999f-4ml88\" (UID: \"6231ed92-57a8-4c48-9c75-e916940b22ea\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131917 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6f8\" (UniqueName: \"kubernetes.io/projected/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-kube-api-access-rn6f8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.131972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffcgl\" (UniqueName: \"kubernetes.io/projected/53576ec8-2f6d-4781-8906-726529cc6049-kube-api-access-ffcgl\") pod \"octavia-operator-controller-manager-6687f8d877-5nsr4\" (UID: \"53576ec8-2f6d-4781-8906-726529cc6049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk66\" (UniqueName: \"kubernetes.io/projected/02b8e60c-3514-4d72-bde6-5af374a926b1-kube-api-access-jmk66\") pod \"placement-operator-controller-manager-5b964cf4cd-27flx\" (UID: \"02b8e60c-3514-4d72-bde6-5af374a926b1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132120 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kmh\" (UniqueName: \"kubernetes.io/projected/3eb94c51-d506-4273-898b-dba537cabea6-kube-api-access-b7kmh\") pod \"swift-operator-controller-manager-68fc8c869-vxhpt\" (UID: \"3eb94c51-d506-4273-898b-dba537cabea6\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.132363 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-b45s5" Jan 30 14:02:53 crc 
kubenswrapper[4793]: I0130 14:02:53.132519 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"] Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.132658 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.132714 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.132691284 +0000 UTC m=+1184.834039775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.132997 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.133078 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:53.633038712 +0000 UTC m=+1184.334387293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.156411 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.157375 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.205319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.216463 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.226440 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.227837 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6f8\" (UniqueName: \"kubernetes.io/projected/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-kube-api-access-rn6f8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.229150 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9ldd5" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.235841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmk66\" (UniqueName: \"kubernetes.io/projected/02b8e60c-3514-4d72-bde6-5af374a926b1-kube-api-access-jmk66\") pod \"placement-operator-controller-manager-5b964cf4cd-27flx\" (UID: \"02b8e60c-3514-4d72-bde6-5af374a926b1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.245172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kmh\" (UniqueName: \"kubernetes.io/projected/3eb94c51-d506-4273-898b-dba537cabea6-kube-api-access-b7kmh\") pod \"swift-operator-controller-manager-68fc8c869-vxhpt\" (UID: \"3eb94c51-d506-4273-898b-dba537cabea6\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.245360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvwb8\" (UniqueName: \"kubernetes.io/projected/5e215cef-de14-424d-9028-a48bad979192-kube-api-access-nvwb8\") pod \"test-operator-controller-manager-56f8bfcd9f-qb5xp\" (UID: \"5e215cef-de14-424d-9028-a48bad979192\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.245456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn565\" (UniqueName: \"kubernetes.io/projected/6b21b0ca-d506-4b1b-b6e1-06e2a96ae033-kube-api-access-qn565\") pod \"telemetry-operator-controller-manager-64b5b76f97-tv5vr\" (UID: \"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.246596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trq5g\" (UniqueName: \"kubernetes.io/projected/6231ed92-57a8-4c48-9c75-e916940b22ea-kube-api-access-trq5g\") pod \"ovn-operator-controller-manager-788c46999f-4ml88\" (UID: \"6231ed92-57a8-4c48-9c75-e916940b22ea\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.257710 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-btjpp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.262290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffcgl\" (UniqueName: 
\"kubernetes.io/projected/53576ec8-2f6d-4781-8906-726529cc6049-kube-api-access-ffcgl\") pod \"octavia-operator-controller-manager-6687f8d877-5nsr4\" (UID: \"53576ec8-2f6d-4781-8906-726529cc6049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.269425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmk66\" (UniqueName: \"kubernetes.io/projected/02b8e60c-3514-4d72-bde6-5af374a926b1-kube-api-access-jmk66\") pod \"placement-operator-controller-manager-5b964cf4cd-27flx\" (UID: \"02b8e60c-3514-4d72-bde6-5af374a926b1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.309869 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.311756 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.316293 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.320662 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.324929 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9qrnc" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.335439 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-btjpp"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.346856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqqw\" (UniqueName: \"kubernetes.io/projected/f65e9448-ee4e-4f22-9bd7-ecf650cb36b5-kube-api-access-lpqqw\") pod \"watcher-operator-controller-manager-564965969-btjpp\" (UID: \"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.346919 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvwb8\" (UniqueName: \"kubernetes.io/projected/5e215cef-de14-424d-9028-a48bad979192-kube-api-access-nvwb8\") pod \"test-operator-controller-manager-56f8bfcd9f-qb5xp\" (UID: \"5e215cef-de14-424d-9028-a48bad979192\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.346943 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn565\" (UniqueName: \"kubernetes.io/projected/6b21b0ca-d506-4b1b-b6e1-06e2a96ae033-kube-api-access-qn565\") pod \"telemetry-operator-controller-manager-64b5b76f97-tv5vr\" (UID: \"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.349286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kmh\" (UniqueName: 
\"kubernetes.io/projected/3eb94c51-d506-4273-898b-dba537cabea6-kube-api-access-b7kmh\") pod \"swift-operator-controller-manager-68fc8c869-vxhpt\" (UID: \"3eb94c51-d506-4273-898b-dba537cabea6\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.368892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn565\" (UniqueName: \"kubernetes.io/projected/6b21b0ca-d506-4b1b-b6e1-06e2a96ae033-kube-api-access-qn565\") pod \"telemetry-operator-controller-manager-64b5b76f97-tv5vr\" (UID: \"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.377175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvwb8\" (UniqueName: \"kubernetes.io/projected/5e215cef-de14-424d-9028-a48bad979192-kube-api-access-nvwb8\") pod \"test-operator-controller-manager-56f8bfcd9f-qb5xp\" (UID: \"5e215cef-de14-424d-9028-a48bad979192\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.377953 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.388938 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.408950 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.433496 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.434625 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.438798 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.439007 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-95jx4" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.439310 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.448394 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqqw\" (UniqueName: \"kubernetes.io/projected/f65e9448-ee4e-4f22-9bd7-ecf650cb36b5-kube-api-access-lpqqw\") pod \"watcher-operator-controller-manager-564965969-btjpp\" (UID: \"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.462366 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.491751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqqw\" (UniqueName: \"kubernetes.io/projected/f65e9448-ee4e-4f22-9bd7-ecf650cb36b5-kube-api-access-lpqqw\") pod \"watcher-operator-controller-manager-564965969-btjpp\" (UID: \"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.518525 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.519438 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.526093 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pzq5g" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.550943 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.550994 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxt7\" (UniqueName: \"kubernetes.io/projected/e9854850-e645-4364-a471-bef994f8536c-kube-api-access-6dxt7\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.551013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdwz\" (UniqueName: \"kubernetes.io/projected/2aae677d-830b-44b8-a792-3d0b527aee89-kube-api-access-5fdwz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb4g2\" (UID: \"2aae677d-830b-44b8-a792-3d0b527aee89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.551039 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.551174 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.583516 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxt7\" (UniqueName: \"kubernetes.io/projected/e9854850-e645-4364-a471-bef994f8536c-kube-api-access-6dxt7\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.656976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdwz\" (UniqueName: \"kubernetes.io/projected/2aae677d-830b-44b8-a792-3d0b527aee89-kube-api-access-5fdwz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb4g2\" (UID: \"2aae677d-830b-44b8-a792-3d0b527aee89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.657000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657155 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657200 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.157186298 +0000 UTC m=+1184.858534789 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657430 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657461 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.657445454 +0000 UTC m=+1185.358793945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657496 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: E0130 14:02:53.657514 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:54.157507995 +0000 UTC m=+1184.858856486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.687694 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.689879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxt7\" (UniqueName: \"kubernetes.io/projected/e9854850-e645-4364-a471-bef994f8536c-kube-api-access-6dxt7\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.690466 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdwz\" (UniqueName: \"kubernetes.io/projected/2aae677d-830b-44b8-a792-3d0b527aee89-kube-api-access-5fdwz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb4g2\" (UID: \"2aae677d-830b-44b8-a792-3d0b527aee89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.786004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.791076 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.827697 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"] Jan 30 14:02:53 crc kubenswrapper[4793]: I0130 14:02:53.911375 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.057194 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.169685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.169743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.169774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.169941 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:54 
crc kubenswrapper[4793]: E0130 14:02:54.169945 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.169968 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.169993 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:56.1699793 +0000 UTC m=+1186.871327781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.170025 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:55.170001091 +0000 UTC m=+1185.871349642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.170106 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:55.170038521 +0000 UTC m=+1185.871387112 (durationBeforeRetry 1s). 
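
[annotation — not part of the journal] The durationBeforeRetry values in the MountVolume failures above and below follow a doubling schedule: 500ms, then 1s, 2s, 4s, and 8s as the same missing secrets keep failing to resolve. A minimal Go sketch of that ladder follows; the 2m2s cap is an assumption recalled from kubelet's exponential-backoff helper, not something this log demonstrates.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Doubling retry delay, as implied by the durationBeforeRetry values
    	// in the nestedpendingoperations.go lines of this journal.
    	delay := 500 * time.Millisecond
    	maxDelay := 2*time.Minute + 2*time.Second // assumed cap, not shown in this log
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
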
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.436773 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.468600 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.559435 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d24cd33_2902_424a_8ffc_76b1e4c2f482.slice/crio-48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560 WatchSource:0}: Error finding container 48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560: Status 404 returned error can't find the container with id 48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.600571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.614170 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa88d14c_0581_439c_9da1_f1123e41a65a.slice/crio-0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4 WatchSource:0}: Error finding container 0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4: Status 404 returned error can't find the container with id 0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.632711 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.659548 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.662323 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53576ec8_2f6d_4781_8906_726529cc6049.slice/crio-d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af WatchSource:0}: Error finding container d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af: Status 404 returned error can't find the container with id d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.670594 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"] Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.684149 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05415bc7_22dc_4b15_a047_6ed62755638d.slice/crio-2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826 WatchSource:0}: Error finding container 2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826: Status 404 returned error can't find the 
container with id 2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.687594 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.688204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.688392 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.689130 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:02:56.68853153 +0000 UTC m=+1187.389880021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.690470 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6231ed92_57a8_4c48_9c75_e916940b22ea.slice/crio-b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673 WatchSource:0}: Error finding container b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673: Status 404 returned error can't find the container with id b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673 Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.694957 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aae677d_830b_44b8_a792_3d0b527aee89.slice/crio-d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816 WatchSource:0}: Error finding container d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816: Status 404 returned error can't find the container with id d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816 Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.701460 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.715357 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.721471 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trq5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-4ml88_openstack-operators(6231ed92-57a8-4c48-9c75-e916940b22ea): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.723407 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podUID="6231ed92-57a8-4c48-9c75-e916940b22ea" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.738761 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.745261 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nb4g2_openstack-operators(2aae677d-830b-44b8-a792-3d0b527aee89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: W0130 14:02:54.746565 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b21b0ca_d506_4b1b_b6e1_06e2a96ae033.slice/crio-9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c WatchSource:0}: Error finding container 9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c: Status 404 returned error can't find the container with id 9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.746615 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podUID="2aae677d-830b-44b8-a792-3d0b527aee89" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.758261 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.759010 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn565,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-tv5vr_openstack-operators(6b21b0ca-d506-4b1b-b6e1-06e2a96ae033): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.760096 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podUID="6b21b0ca-d506-4b1b-b6e1-06e2a96ae033" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.762715 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvwb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-qb5xp_openstack-operators(5e215cef-de14-424d-9028-a48bad979192): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.763780 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podUID="5e215cef-de14-424d-9028-a48bad979192" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.764815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" event={"ID":"53576ec8-2f6d-4781-8906-726529cc6049","Type":"ContainerStarted","Data":"d1968bfeff04a0b1986aa9ac08d280acb41307dfac2d7259328a41885c81e2af"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.766715 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.768301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" event={"ID":"bdcd04f7-09fa-4b1b-8b99-3de61a28a337","Type":"ContainerStarted","Data":"1d38710c86fb5e192aeb14540956d24656db1d48954b833c32e36e4cb9ce5b0d"} Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.768537 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpqqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-btjpp_openstack-operators(f65e9448-ee4e-4f22-9bd7-ecf650cb36b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.769644 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" podUID="f65e9448-ee4e-4f22-9bd7-ecf650cb36b5" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.770792 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" event={"ID":"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77","Type":"ContainerStarted","Data":"f65d33231af656e5de4501b44ce1101798fdfa11173e1a209361899a47b40899"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.771598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" event={"ID":"8d24cd33-2902-424a-8ffc-76b1e4c2f482","Type":"ContainerStarted","Data":"48ac2860bb733077b18c0f7b9e3c3f267cbb64d710035863cff6a4b356598560"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.773194 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.777631 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"] Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.777853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" event={"ID":"fa88d14c-0581-439c-9da1-f1123e41a65a","Type":"ContainerStarted","Data":"0ea2d720f25ea0934ae07c6c4aecec4c0b367e3c1e17238c45915bcf529368a4"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.781875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" event={"ID":"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033","Type":"ContainerStarted","Data":"9567dbcf6ae9046e00f013cb713d398f01b6af499987ce9bb46806a656bf7a7c"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.787588 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-btjpp"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.789393 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podUID="6b21b0ca-d506-4b1b-b6e1-06e2a96ae033" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.790792 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" event={"ID":"6231ed92-57a8-4c48-9c75-e916940b22ea","Type":"ContainerStarted","Data":"b94380d56581da8ae4c362a87f5e421d4e3294ab6840718c1ebed01f8c023673"} Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.795835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podUID="6231ed92-57a8-4c48-9c75-e916940b22ea" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.795932 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" event={"ID":"6f991e04-2db3-4b32-bc83-8bbce4ce7a08","Type":"ContainerStarted","Data":"084c7f30b9ee8d5a0ee3b2f434e8e027007bd69df096a48cdd3517c90f12da7b"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.802997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" event={"ID":"8835e5d9-c37d-4744-95cb-c56c10a58647","Type":"ContainerStarted","Data":"9b949a5b3cef31ec223df871bf3608c5eae084926f27907344d81bbc74673679"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.807928 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" event={"ID":"31ca6ac1-d2da-4325-baa4-e18fc3514721","Type":"ContainerStarted","Data":"be06a659c6d0e3fe4725ba323ec3085bbf746717b68d98d5bfe8acd5fa8709b8"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.809377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" 
event={"ID":"1d859404-a29c-46c9-b66a-fed5ff0b13f0","Type":"ContainerStarted","Data":"903b1f5dd62c9bd3678b966b8221e9010776913365c5395a18b2d8922f047686"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.810800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" event={"ID":"02b8e60c-3514-4d72-bde6-5af374a926b1","Type":"ContainerStarted","Data":"54c0133d98303667573b43bf7596ca633f8ec91b36f920d837da801afa6f8e99"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.814001 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" event={"ID":"05415bc7-22dc-4b15-a047-6ed62755638d","Type":"ContainerStarted","Data":"2b7a2176b3a78e18459fcda69d964cd416be021042c9434344a5670b8442e826"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.816203 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" event={"ID":"2aae677d-830b-44b8-a792-3d0b527aee89","Type":"ContainerStarted","Data":"d143199e1760599c54c05f152a8283ce45bdf63e384aabcbfa5d551bd5be9816"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.816705 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"] Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.818665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podUID="2aae677d-830b-44b8-a792-3d0b527aee89" Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.819625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" event={"ID":"710c57e4-a09e-4db1-a03b-13db05085d41","Type":"ContainerStarted","Data":"4a75ad9e81d987662f6d439402d58e63420f7818d17550b22161b78009b8c1c6"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.823461 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" event={"ID":"7c34e714-0f18-4e41-ab9c-1dfe4859e644","Type":"ContainerStarted","Data":"c9b7879953162331770b5c3c1b2734204ffdaae76e50a6aba51675f4d73acdd4"} Jan 30 14:02:54 crc kubenswrapper[4793]: I0130 14:02:54.825472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" event={"ID":"ce9be14f-8255-421e-91b4-a30fc5482ff4","Type":"ContainerStarted","Data":"5264eb0e11dbbccbf6732042af23f0fc227036f40a41883b2873b0ef8a50b4ce"} Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.830313 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7kmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-vxhpt_openstack-operators(3eb94c51-d506-4273-898b-dba537cabea6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 14:02:54 crc kubenswrapper[4793]: E0130 14:02:54.831551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podUID="3eb94c51-d506-4273-898b-dba537cabea6" Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.196701 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.197122 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197251 4793 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197295 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:57.197282927 +0000 UTC m=+1187.898631408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197337 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.197368 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:02:57.197350449 +0000 UTC m=+1187.898698940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.838852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" event={"ID":"5e215cef-de14-424d-9028-a48bad979192","Type":"ContainerStarted","Data":"27ffa0b55c7fffa9a10f3884e6cd74d7d8dd8a29eb7e3983dc9aa667aa6653d5"} Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.841983 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podUID="5e215cef-de14-424d-9028-a48bad979192" Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.844846 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" event={"ID":"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5","Type":"ContainerStarted","Data":"4d52683ce83ecfbefd34cf10d049265d36a877ab8e75a0c32263780971962732"} Jan 30 14:02:55 crc kubenswrapper[4793]: I0130 14:02:55.847106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" event={"ID":"3eb94c51-d506-4273-898b-dba537cabea6","Type":"ContainerStarted","Data":"48cc938651f3825f9d91039109cb9b855313e932e22358a8ed5ec945990d8ce6"} Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.847388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" 
podUID="f65e9448-ee4e-4f22-9bd7-ecf650cb36b5" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.864402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podUID="2aae677d-830b-44b8-a792-3d0b527aee89" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.864462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podUID="6231ed92-57a8-4c48-9c75-e916940b22ea" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.868069 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podUID="3eb94c51-d506-4273-898b-dba537cabea6" Jan 30 14:02:55 crc kubenswrapper[4793]: E0130 14:02:55.868103 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podUID="6b21b0ca-d506-4b1b-b6e1-06e2a96ae033" Jan 30 14:02:56 crc kubenswrapper[4793]: I0130 14:02:56.222025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.222190 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.222283 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:00.222264618 +0000 UTC m=+1190.923613169 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: I0130 14:02:56.730590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.730795 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.731099 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:00.731075415 +0000 UTC m=+1191.432423906 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.886632 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podUID="3eb94c51-d506-4273-898b-dba537cabea6" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.886772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podUID="5e215cef-de14-424d-9028-a48bad979192" Jan 30 14:02:56 crc kubenswrapper[4793]: E0130 14:02:56.887029 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" podUID="f65e9448-ee4e-4f22-9bd7-ecf650cb36b5" Jan 30 14:02:57 crc kubenswrapper[4793]: I0130 14:02:57.238565 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:57 crc kubenswrapper[4793]: 
I0130 14:02:57.238629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238765 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238806 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238855 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:01.238833587 +0000 UTC m=+1191.940182068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:02:57 crc kubenswrapper[4793]: E0130 14:02:57.238874 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:01.238868128 +0000 UTC m=+1191.940216619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: I0130 14:03:00.315820 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.315997 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.316264 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:08.316243179 +0000 UTC m=+1199.017591680 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: I0130 14:03:00.822387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.822593 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:00 crc kubenswrapper[4793]: E0130 14:03:00.822730 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:08.82269789 +0000 UTC m=+1199.524046422 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: I0130 14:03:01.328987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:01 crc kubenswrapper[4793]: I0130 14:03:01.329161 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329170 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329358 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:09.329325215 +0000 UTC m=+1200.030673746 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329215 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 14:03:01 crc kubenswrapper[4793]: E0130 14:03:01.329482 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. No retries permitted until 2026-01-30 14:03:09.329451948 +0000 UTC m=+1200.030800439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "metrics-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.098998 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.099748 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdkv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-v77jx_openstack-operators(7c34e714-0f18-4e41-ab9c-1dfe4859e644): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.100924 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" podUID="7c34e714-0f18-4e41-ab9c-1dfe4859e644" Jan 30 14:03:08 crc kubenswrapper[4793]: I0130 14:03:08.335175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.335379 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.335499 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert podName:97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:24.335479761 +0000 UTC m=+1215.036828302 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert") pod "infra-operator-controller-manager-79955696d6-khfs7" (UID: "97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642") : secret "infra-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.648907 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.649466 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8bkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-9kwwr_openstack-operators(8835e5d9-c37d-4744-95cb-c56c10a58647): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.651368 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" 
podUID="8835e5d9-c37d-4744-95cb-c56c10a58647" Jan 30 14:03:08 crc kubenswrapper[4793]: I0130 14:03:08.846146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.846340 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.846413 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert podName:e446e97c-6e9f-4dc2-b5fd-fb63451fd326 nodeName:}" failed. No retries permitted until 2026-01-30 14:03:24.846391498 +0000 UTC m=+1215.547739979 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" (UID: "e446e97c-6e9f-4dc2-b5fd-fb63451fd326") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.951931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" podUID="8835e5d9-c37d-4744-95cb-c56c10a58647" Jan 30 14:03:08 crc kubenswrapper[4793]: E0130 14:03:08.951836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" podUID="7c34e714-0f18-4e41-ab9c-1dfe4859e644" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.243374 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.243662 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kkgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-x6pk6_openstack-operators(05415bc7-22dc-4b15-a047-6ed62755638d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.244869 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" podUID="05415bc7-22dc-4b15-a047-6ed62755638d" Jan 30 14:03:09 crc kubenswrapper[4793]: I0130 14:03:09.352922 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:09 crc kubenswrapper[4793]: I0130 14:03:09.352997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.353360 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.353419 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs podName:e9854850-e645-4364-a471-bef994f8536c nodeName:}" failed. 
No retries permitted until 2026-01-30 14:03:25.353403393 +0000 UTC m=+1216.054751884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs") pod "openstack-operator-controller-manager-75c5857d49-pm446" (UID: "e9854850-e645-4364-a471-bef994f8536c") : secret "webhook-server-cert" not found Jan 30 14:03:09 crc kubenswrapper[4793]: I0130 14:03:09.371280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-metrics-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.803179 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.803452 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmk66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b964cf4cd-27flx_openstack-operators(02b8e60c-3514-4d72-bde6-5af374a926b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.805730 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" podUID="02b8e60c-3514-4d72-bde6-5af374a926b1" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.957264 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" podUID="05415bc7-22dc-4b15-a047-6ed62755638d" Jan 30 14:03:09 crc kubenswrapper[4793]: E0130 14:03:09.957671 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" podUID="02b8e60c-3514-4d72-bde6-5af374a926b1" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.798483 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.799178 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wmpv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-hjpkr_openstack-operators(6f991e04-2db3-4b32-bc83-8bbce4ce7a08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.801580 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" podUID="6f991e04-2db3-4b32-bc83-8bbce4ce7a08" Jan 30 14:03:11 crc kubenswrapper[4793]: E0130 14:03:11.969767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" podUID="6f991e04-2db3-4b32-bc83-8bbce4ce7a08" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.413541 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.413606 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.413646 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.414336 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.414396 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" 
containerName="machine-config-daemon" containerID="cri-o://2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28" gracePeriod=600 Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.976137 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28" exitCode=0 Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.976192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28"} Jan 30 14:03:12 crc kubenswrapper[4793]: I0130 14:03:12.976230 4793 scope.go:117] "RemoveContainer" containerID="a70290c8d43e76215d2545599390db044bcef74601c3ab38a37df4fc1393ebad" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.296132 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.296776 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7wj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-n29l5_openstack-operators(fa88d14c-0581-439c-9da1-f1123e41a65a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.298116 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" podUID="fa88d14c-0581-439c-9da1-f1123e41a65a" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.863798 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.863973 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jl4hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-g5848_openstack-operators(1d859404-a29c-46c9-b66a-fed5ff0b13f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.865931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" podUID="1d859404-a29c-46c9-b66a-fed5ff0b13f0" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.995919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" podUID="fa88d14c-0581-439c-9da1-f1123e41a65a" Jan 30 14:03:15 crc kubenswrapper[4793]: E0130 14:03:15.997524 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" podUID="1d859404-a29c-46c9-b66a-fed5ff0b13f0" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.341618 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.342105 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n66zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-9ftxd_openstack-operators(ce9be14f-8255-421e-91b4-a30fc5482ff4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.343836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" podUID="ce9be14f-8255-421e-91b4-a30fc5482ff4" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.877971 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.878225 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qctdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-82cvq_openstack-operators(bdcd04f7-09fa-4b1b-8b99-3de61a28a337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:17 crc kubenswrapper[4793]: E0130 14:03:17.879961 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" podUID="bdcd04f7-09fa-4b1b-8b99-3de61a28a337" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.007348 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" podUID="ce9be14f-8255-421e-91b4-a30fc5482ff4" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.007540 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" podUID="bdcd04f7-09fa-4b1b-8b99-3de61a28a337" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.405803 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.405968 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zntmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-vtx9d_openstack-operators(31ca6ac1-d2da-4325-baa4-e18fc3514721): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:03:18 crc kubenswrapper[4793]: E0130 14:03:18.407173 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" podUID="31ca6ac1-d2da-4325-baa4-e18fc3514721" Jan 30 14:03:19 crc kubenswrapper[4793]: E0130 14:03:19.013370 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" podUID="31ca6ac1-d2da-4325-baa4-e18fc3514721" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.419967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod 
\"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.426425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642-cert\") pod \"infra-operator-controller-manager-79955696d6-khfs7\" (UID: \"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.541257 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ct9pn" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.550537 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.926501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:24 crc kubenswrapper[4793]: I0130 14:03:24.943199 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e446e97c-6e9f-4dc2-b5fd-fb63451fd326-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs\" (UID: \"e446e97c-6e9f-4dc2-b5fd-fb63451fd326\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.142610 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-spknc" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.151531 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.433442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.437379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9854850-e645-4364-a471-bef994f8536c-webhook-certs\") pod \"openstack-operator-controller-manager-75c5857d49-pm446\" (UID: \"e9854850-e645-4364-a471-bef994f8536c\") " pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.682025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-95jx4" Jan 30 14:03:25 crc kubenswrapper[4793]: I0130 14:03:25.690717 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" Jan 30 14:03:28 crc kubenswrapper[4793]: I0130 14:03:28.422881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"] Jan 30 14:03:28 crc kubenswrapper[4793]: I0130 14:03:28.542590 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"] Jan 30 14:03:28 crc kubenswrapper[4793]: W0130 14:03:28.578280 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9854850_e645_4364_a471_bef994f8536c.slice/crio-b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc WatchSource:0}: Error finding container b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc: Status 404 returned error can't find the container with id b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc Jan 30 14:03:28 crc kubenswrapper[4793]: I0130 14:03:28.670400 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"] Jan 30 14:03:28 crc kubenswrapper[4793]: W0130 14:03:28.766849 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode446e97c_6e9f_4dc2_b5fd_fb63451fd326.slice/crio-bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f WatchSource:0}: Error finding container bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f: Status 404 returned error can't find the container with id bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.112089 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" event={"ID":"710c57e4-a09e-4db1-a03b-13db05085d41","Type":"ContainerStarted","Data":"6d08b8f8d51f12a15ce91448e8d9f2a4814c5e254c97b37b448a077769d1a560"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.112220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.114169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" event={"ID":"6f991e04-2db3-4b32-bc83-8bbce4ce7a08","Type":"ContainerStarted","Data":"5d58d9cb51b15256753293ae92c1997066479a155769973e25ce2cebf51cc9d1"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.114305 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.118078 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" event={"ID":"e446e97c-6e9f-4dc2-b5fd-fb63451fd326","Type":"ContainerStarted","Data":"bae18ca5aa3b83a765daaac6a1480da665ff7a0367f0f791d1d2547b42a5e94f"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.123693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" 
event={"ID":"2aae677d-830b-44b8-a792-3d0b527aee89","Type":"ContainerStarted","Data":"293dcbb0b62f2f73a14860453e3edc835f536be4bed5bf16cff006627cc9c8b3"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.125409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" event={"ID":"7c34e714-0f18-4e41-ab9c-1dfe4859e644","Type":"ContainerStarted","Data":"4b67acb08e34346b47114952d4c9d43251b624fa74d9feed65156034c775e72f"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.125976 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.129614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" event={"ID":"6231ed92-57a8-4c48-9c75-e916940b22ea","Type":"ContainerStarted","Data":"1cd04c4391e1aa64f2f8d19c195ecc4ea1893b517242f6600a0448557e5b3aef"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.129961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.142331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" event={"ID":"53576ec8-2f6d-4781-8906-726529cc6049","Type":"ContainerStarted","Data":"4498d2a99f62f450fb0ee6f1eeb7e64c106e8ce8c79acd314b0b7fe2c691718f"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.142419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.147597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" event={"ID":"6b21b0ca-d506-4b1b-b6e1-06e2a96ae033","Type":"ContainerStarted","Data":"a1e488365e9baeba1abff0c1b1ae3300c6079d75f704730ad6b738a785a519bc"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.147973 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.154082 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" event={"ID":"ec981da4-a3ba-4e4e-a0eb-2168ab79fe77","Type":"ContainerStarted","Data":"3f032202705eb4d294a11cd1aaa16cacaf5ea769d8ca352c5ded6dbdd7b47465"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.154175 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.165619 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78" podStartSLOduration=12.097361437 podStartE2EDuration="37.165601076s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:53.865164979 +0000 UTC m=+1184.566513470" lastFinishedPulling="2026-01-30 14:03:18.933404598 +0000 UTC m=+1209.634753109" observedRunningTime="2026-01-30 14:03:29.157327149 +0000 UTC m=+1219.858675650" 
watchObservedRunningTime="2026-01-30 14:03:29.165601076 +0000 UTC m=+1219.866949567" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.165650 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" event={"ID":"8d24cd33-2902-424a-8ffc-76b1e4c2f482","Type":"ContainerStarted","Data":"7a4840128de67007bc3089340f7bda4d74cb43411b5799584659144d01f54d2d"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.166521 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.184042 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.198903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" event={"ID":"02b8e60c-3514-4d72-bde6-5af374a926b1","Type":"ContainerStarted","Data":"322e117348d537a97afb0fe3e60f32a7b2ddc9b3913e2e54e9a4fcb830fd8e87"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.203370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.207481 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb4g2" podStartSLOduration=2.940212422 podStartE2EDuration="36.207463599s" podCreationTimestamp="2026-01-30 14:02:53 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.719916503 +0000 UTC m=+1185.421264984" lastFinishedPulling="2026-01-30 14:03:27.98716767 +0000 UTC m=+1218.688516161" observedRunningTime="2026-01-30 14:03:29.190524363 +0000 UTC m=+1219.891872874" watchObservedRunningTime="2026-01-30 14:03:29.207463599 +0000 UTC m=+1219.908812090" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.227345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" event={"ID":"5e215cef-de14-424d-9028-a48bad979192","Type":"ContainerStarted","Data":"20398ace5d623f7d4eb3a8e0b37021d7885d43d4210688d410a6a7ae44ebd035"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.227982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.228901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" event={"ID":"e9854850-e645-4364-a471-bef994f8536c","Type":"ContainerStarted","Data":"b4a19cadd40eb82e5a0bd838b68df7d7a74c2b6eca30738e588accba5dbfe4dc"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.234345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" event={"ID":"3eb94c51-d506-4273-898b-dba537cabea6","Type":"ContainerStarted","Data":"f1bfeef5977d2bf323ff2e676f330bfe179b896f579fdc70d159507c0d75fa2c"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.235036 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.247089 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx" podStartSLOduration=3.863579387 podStartE2EDuration="37.247073008s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.58535947 +0000 UTC m=+1185.286707961" lastFinishedPulling="2026-01-30 14:03:27.968853091 +0000 UTC m=+1218.670201582" observedRunningTime="2026-01-30 14:03:29.243518313 +0000 UTC m=+1219.944866794" watchObservedRunningTime="2026-01-30 14:03:29.247073008 +0000 UTC m=+1219.948421489" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.250362 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" event={"ID":"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642","Type":"ContainerStarted","Data":"5b9a9eec655e99bf4a5a92b43436e7d40ce0b2fd269fc5e49ce02f9134364010"} Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.280699 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr" podStartSLOduration=3.912459948 podStartE2EDuration="37.280675313s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.588188528 +0000 UTC m=+1185.289537019" lastFinishedPulling="2026-01-30 14:03:27.956403893 +0000 UTC m=+1218.657752384" observedRunningTime="2026-01-30 14:03:29.267592199 +0000 UTC m=+1219.968940690" watchObservedRunningTime="2026-01-30 14:03:29.280675313 +0000 UTC m=+1219.982023804" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.318882 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4" podStartSLOduration=13.053739985 podStartE2EDuration="37.318866028s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.669990547 +0000 UTC m=+1185.371339038" lastFinishedPulling="2026-01-30 14:03:18.93511659 +0000 UTC m=+1209.636465081" observedRunningTime="2026-01-30 14:03:29.314398491 +0000 UTC m=+1220.015746982" watchObservedRunningTime="2026-01-30 14:03:29.318866028 +0000 UTC m=+1220.020214519" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.347408 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c" podStartSLOduration=12.294179871 podStartE2EDuration="37.34738931s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:53.878778435 +0000 UTC m=+1184.580126926" lastFinishedPulling="2026-01-30 14:03:18.931987864 +0000 UTC m=+1209.633336365" observedRunningTime="2026-01-30 14:03:29.346571661 +0000 UTC m=+1220.047920162" watchObservedRunningTime="2026-01-30 14:03:29.34738931 +0000 UTC m=+1220.048737801" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.370821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr" podStartSLOduration=4.266422646 podStartE2EDuration="37.370799601s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.758898007 +0000 UTC m=+1185.460246498" lastFinishedPulling="2026-01-30 
14:03:27.863274962 +0000 UTC m=+1218.564623453" observedRunningTime="2026-01-30 14:03:29.368799834 +0000 UTC m=+1220.070148335" watchObservedRunningTime="2026-01-30 14:03:29.370799601 +0000 UTC m=+1220.072148092" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.396977 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88" podStartSLOduration=4.235441384 podStartE2EDuration="37.396958058s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.720005075 +0000 UTC m=+1185.421353566" lastFinishedPulling="2026-01-30 14:03:27.881521749 +0000 UTC m=+1218.582870240" observedRunningTime="2026-01-30 14:03:29.391285442 +0000 UTC m=+1220.092633943" watchObservedRunningTime="2026-01-30 14:03:29.396958058 +0000 UTC m=+1220.098306549" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.427014 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt" podStartSLOduration=4.384095064 podStartE2EDuration="37.426999187s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.830210465 +0000 UTC m=+1185.531558956" lastFinishedPulling="2026-01-30 14:03:27.873114548 +0000 UTC m=+1218.574463079" observedRunningTime="2026-01-30 14:03:29.425605504 +0000 UTC m=+1220.126953995" watchObservedRunningTime="2026-01-30 14:03:29.426999187 +0000 UTC m=+1220.128347678" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.469450 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp" podStartSLOduration=4.379077993 podStartE2EDuration="37.469434624s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.762630915 +0000 UTC m=+1185.463979406" lastFinishedPulling="2026-01-30 14:03:27.852987546 +0000 UTC m=+1218.554336037" observedRunningTime="2026-01-30 14:03:29.467293503 +0000 UTC m=+1220.168641994" watchObservedRunningTime="2026-01-30 14:03:29.469434624 +0000 UTC m=+1220.170783115" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.557896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9" podStartSLOduration=13.207355674 podStartE2EDuration="37.557879143s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.584950949 +0000 UTC m=+1185.286299440" lastFinishedPulling="2026-01-30 14:03:18.935474398 +0000 UTC m=+1209.636822909" observedRunningTime="2026-01-30 14:03:29.537223838 +0000 UTC m=+1220.238572339" watchObservedRunningTime="2026-01-30 14:03:29.557879143 +0000 UTC m=+1220.259227634" Jan 30 14:03:29 crc kubenswrapper[4793]: I0130 14:03:29.656271 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx" podStartSLOduration=4.393487299 podStartE2EDuration="37.656256119s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.694007602 +0000 UTC m=+1185.395356093" lastFinishedPulling="2026-01-30 14:03:27.956776412 +0000 UTC m=+1218.658124913" observedRunningTime="2026-01-30 14:03:29.621715171 +0000 UTC m=+1220.323063662" watchObservedRunningTime="2026-01-30 14:03:29.656256119 +0000 UTC m=+1220.357604610" Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 
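Note: the pod_startup_latency_tracker entries encode a consistent relationship that can be checked against the logged values: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which the startup SLO deliberately excludes. The sketch below redoes the arithmetic for the horizon-operator entry above; it is a worked check, nothing more. One caveat: the kubelet subtracts monotonic readings (the m=+... offsets), so pure wall-clock arithmetic can differ from the logged SLO figure by a few nanoseconds.

```go
// slocheck.go - reproduces the horizon-operator numbers from the
// pod_startup_latency_tracker entry above: the SLO duration is the
// end-to-end duration minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 14:02:52 +0000 UTC")
	firstPull := mustParse("2026-01-30 14:02:53.865164979 +0000 UTC")
	lastPull := mustParse("2026-01-30 14:03:18.933404598 +0000 UTC")
	running := mustParse("2026-01-30 14:03:29.165601076 +0000 UTC")

	e2e := running.Sub(created)          // matches podStartE2EDuration="37.165601076s"
	slo := e2e - lastPull.Sub(firstPull) // ~ podStartSLOduration=12.097361437; the
	// kubelet uses the monotonic m=+ offsets, so the last digits may differ by a few ns

	fmt.Println("E2E:", e2e)
	fmt.Println("SLO:", slo)
}
```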
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.256980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" event={"ID":"1d859404-a29c-46c9-b66a-fed5ff0b13f0","Type":"ContainerStarted","Data":"ac3e067efe5c5da02b8fb97811c39920d5020a10f369eeb121a52a4572239128"}
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.257963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.259129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" event={"ID":"f65e9448-ee4e-4f22-9bd7-ecf650cb36b5","Type":"ContainerStarted","Data":"b0d4b39b0f9cecd59cb0720b242941b5b172ab8b965299f045f58c98b9fe743e"}
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.260569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" event={"ID":"e9854850-e645-4364-a471-bef994f8536c","Type":"ContainerStarted","Data":"daa484d5ca82becb56802bcf64a76f541e659963aef9603cb9dac6d4d9db7698"}
Jan 30 14:03:30 crc kubenswrapper[4793]: I0130 14:03:30.294319 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848" podStartSLOduration=4.217579076 podStartE2EDuration="38.294301612s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.079826051 +0000 UTC m=+1184.781174542" lastFinishedPulling="2026-01-30 14:03:28.156548587 +0000 UTC m=+1218.857897078" observedRunningTime="2026-01-30 14:03:30.292477948 +0000 UTC m=+1220.993826439" watchObservedRunningTime="2026-01-30 14:03:30.294301612 +0000 UTC m=+1220.995650103"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.273134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" event={"ID":"05415bc7-22dc-4b15-a047-6ed62755638d","Type":"ContainerStarted","Data":"92185437c26d53f7e6a0c77384511c8172fbbe61eb0097a8737beb22aac455a0"}
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.273503 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.275299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" event={"ID":"8835e5d9-c37d-4744-95cb-c56c10a58647","Type":"ContainerStarted","Data":"ce279e3fb363f3026da51a6ba412e86078297a583fc64a03f049c53e8f30d9e2"}
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.275325 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.275972 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.276390 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.290600 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6" podStartSLOduration=6.027461367 podStartE2EDuration="39.290580945s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.706242675 +0000 UTC m=+1185.407591166" lastFinishedPulling="2026-01-30 14:03:27.969362253 +0000 UTC m=+1218.670710744" observedRunningTime="2026-01-30 14:03:31.289396147 +0000 UTC m=+1221.990744638" watchObservedRunningTime="2026-01-30 14:03:31.290580945 +0000 UTC m=+1221.991929436"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.342268 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446" podStartSLOduration=38.342242243 podStartE2EDuration="38.342242243s" podCreationTimestamp="2026-01-30 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:03:31.335964363 +0000 UTC m=+1222.037312864" watchObservedRunningTime="2026-01-30 14:03:31.342242243 +0000 UTC m=+1222.043590734"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.365847 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp" podStartSLOduration=6.166286772 podStartE2EDuration="39.365828358s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.768392333 +0000 UTC m=+1185.469740824" lastFinishedPulling="2026-01-30 14:03:27.967933919 +0000 UTC m=+1218.669282410" observedRunningTime="2026-01-30 14:03:31.361641717 +0000 UTC m=+1222.062990198" watchObservedRunningTime="2026-01-30 14:03:31.365828358 +0000 UTC m=+1222.067176839"
Jan 30 14:03:31 crc kubenswrapper[4793]: I0130 14:03:31.381911 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr" podStartSLOduration=5.292979943 podStartE2EDuration="39.381895852s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:53.879091732 +0000 UTC m=+1184.580440223" lastFinishedPulling="2026-01-30 14:03:27.968007641 +0000 UTC m=+1218.669356132" observedRunningTime="2026-01-30 14:03:31.379812423 +0000 UTC m=+1222.081160914" watchObservedRunningTime="2026-01-30 14:03:31.381895852 +0000 UTC m=+1222.083244343"
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.283113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" event={"ID":"fa88d14c-0581-439c-9da1-f1123e41a65a","Type":"ContainerStarted","Data":"c5d8ae934a12d94beb722222e25e7718bca78238f52645c316e13a698f5d4cdb"}
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.283345 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.285547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" event={"ID":"31ca6ac1-d2da-4325-baa4-e18fc3514721","Type":"ContainerStarted","Data":"131a8866ed381dbfacdfaee2b04e7ec69858d1bcb03c1fcf1fcd221966f702f5"}
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.285731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.287716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" event={"ID":"ce9be14f-8255-421e-91b4-a30fc5482ff4","Type":"ContainerStarted","Data":"7fa904291b57f7502f2f4c58d66ceb0ac545053075d9a12e008340630dd1df71"}
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.288135 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.316278 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5" podStartSLOduration=3.9605311690000002 podStartE2EDuration="40.316262543s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.664023334 +0000 UTC m=+1185.365371825" lastFinishedPulling="2026-01-30 14:03:31.019754708 +0000 UTC m=+1221.721103199" observedRunningTime="2026-01-30 14:03:32.300539226 +0000 UTC m=+1223.001887717" watchObservedRunningTime="2026-01-30 14:03:32.316262543 +0000 UTC m=+1223.017611034"
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.319189 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d" podStartSLOduration=3.994090952 podStartE2EDuration="40.319173433s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.684948955 +0000 UTC m=+1185.386297446" lastFinishedPulling="2026-01-30 14:03:31.010031436 +0000 UTC m=+1221.711379927" observedRunningTime="2026-01-30 14:03:32.313486207 +0000 UTC m=+1223.014834708" watchObservedRunningTime="2026-01-30 14:03:32.319173433 +0000 UTC m=+1223.020521924"
Jan 30 14:03:32 crc kubenswrapper[4793]: I0130 14:03:32.419682 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd" podStartSLOduration=3.80443487 podStartE2EDuration="40.41966551s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.585605986 +0000 UTC m=+1185.286954477" lastFinishedPulling="2026-01-30 14:03:31.200836626 +0000 UTC m=+1221.902185117" observedRunningTime="2026-01-30 14:03:32.336440306 +0000 UTC m=+1223.037788817" watchObservedRunningTime="2026-01-30 14:03:32.41966551 +0000 UTC m=+1223.121014001"
Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.312811 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5nsr4"
Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.383644 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-qb5xp"
Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.414003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-27flx"
Jan 30 14:03:33 crc kubenswrapper[4793]: I0130 14:03:33.586816 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-tv5vr"
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.301566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" event={"ID":"bdcd04f7-09fa-4b1b-8b99-3de61a28a337","Type":"ContainerStarted","Data":"ae8edf990d7d598da8c49027fd0c9141b51d72ee143023d81d0e02cb56137363"}
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.302149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.303194 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" event={"ID":"e446e97c-6e9f-4dc2-b5fd-fb63451fd326","Type":"ContainerStarted","Data":"4ff604a61e6addc102a2634f52536b0ff351a12eebda7c87d40b6e2cfbb568d5"}
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.303311 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.306679 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" event={"ID":"97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642","Type":"ContainerStarted","Data":"dc003551a07fbe409c0c00ed6b2229783c5b4a02ab68f9eb38c1157364077279"}
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.306912 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.321634 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq" podStartSLOduration=3.008003643 podStartE2EDuration="42.321617216s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:02:54.674222188 +0000 UTC m=+1185.375570679" lastFinishedPulling="2026-01-30 14:03:33.987835761 +0000 UTC m=+1224.689184252" observedRunningTime="2026-01-30 14:03:34.316686237 +0000 UTC m=+1225.018034738" watchObservedRunningTime="2026-01-30 14:03:34.321617216 +0000 UTC m=+1225.022965707"
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.350766 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs" podStartSLOduration=37.142436782 podStartE2EDuration="42.350751714s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:03:28.778167377 +0000 UTC m=+1219.479515868" lastFinishedPulling="2026-01-30 14:03:33.986482289 +0000 UTC m=+1224.687830800" observedRunningTime="2026-01-30 14:03:34.345789035 +0000 UTC m=+1225.047137526" watchObservedRunningTime="2026-01-30 14:03:34.350751714 +0000 UTC m=+1225.052100205"
Jan 30 14:03:34 crc kubenswrapper[4793]: I0130 14:03:34.373186 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7" podStartSLOduration=37.001625258 podStartE2EDuration="42.373170941s" podCreationTimestamp="2026-01-30 14:02:52 +0000 UTC" firstStartedPulling="2026-01-30 14:03:28.615070729 +0000 UTC m=+1219.316419220" lastFinishedPulling="2026-01-30 14:03:33.986616392 +0000 UTC m=+1224.687964903" observedRunningTime="2026-01-30 14:03:34.367542146 +0000 UTC m=+1225.068890637" watchObservedRunningTime="2026-01-30 14:03:34.373170941 +0000 UTC m=+1225.074519432"
Jan 30 14:03:35 crc kubenswrapper[4793]: I0130 14:03:35.697248 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75c5857d49-pm446"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.558569 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8bg6c"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.581719 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9kwwr"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.632674 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-hjpkr"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.639754 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-g5848"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.712492 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-m4q78"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.972926 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k4tz9"
Jan 30 14:03:42 crc kubenswrapper[4793]: I0130 14:03:42.993568 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v77jx"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.063258 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-82cvq"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.110621 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-9ftxd"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.231272 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n29l5"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.231547 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x6pk6"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.233444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vtx9d"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.334695 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4ml88"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.394968 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxhpt"
Jan 30 14:03:43 crc kubenswrapper[4793]: I0130 14:03:43.695642 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-btjpp"
Jan 30 14:03:44 crc kubenswrapper[4793]: I0130 14:03:44.556797 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-khfs7"
Jan 30 14:03:45 crc kubenswrapper[4793]: I0130 14:03:45.159430 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.432910 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"]
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.434391 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.439942 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nksbk"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.440221 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.440374 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.440481 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.444143 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"]
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.508587 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"]
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.509781 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.513587 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514315 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514403 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.514440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.523758 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"]
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.615946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616097 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616963 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.616997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.637353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"dnsmasq-dns-78dd6ddcc-qtp9b\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.640888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"dnsmasq-dns-675f4bcbfc-tngjn\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.756580 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn"
Jan 30 14:04:01 crc kubenswrapper[4793]: I0130 14:04:01.823420 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b"
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.097308 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.102631 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.189021 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:02 crc kubenswrapper[4793]: W0130 14:04:02.191486 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6047db8_60b6_4b1d_94d0_9934475fb39e.slice/crio-0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28 WatchSource:0}: Error finding container 0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28: Status 404 returned error can't find the container with id 0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28 Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.518828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" event={"ID":"a6047db8-60b6-4b1d-94d0-9934475fb39e","Type":"ContainerStarted","Data":"0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28"} Jan 30 14:04:02 crc kubenswrapper[4793]: I0130 14:04:02.521411 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" event={"ID":"ea64ca1b-5302-40cc-9918-810b75c36240","Type":"ContainerStarted","Data":"ee3c031683159179731efba2dde35050df6b60a59cdc2e43e0c06f26ed4f9d1f"} Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.180039 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.223041 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.224228 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.241038 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.379422 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.379484 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.379506 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.480543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.480606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.480631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.481986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.482504 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.552419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6fw\" (UniqueName: 
\"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"dnsmasq-dns-666b6646f7-6twpw\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.606984 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.630788 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.631944 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.659472 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.786210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.786289 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.786364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.841612 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.893495 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.893793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.893841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.894927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.895032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.925169 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"dnsmasq-dns-57d769cc4f-vfvss\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:04 crc kubenswrapper[4793]: I0130 14:04:04.964634 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.348320 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.443241 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.444348 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.449918 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.455954 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.457099 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.457246 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.463397 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.471963 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.472945 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.472950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4mm4r" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.571377 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576016 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576369 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576455 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576524 
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576610 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576887 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.576981 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.611728 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerStarted","Data":"b6d25f5f6c7c96e5312511cdf0154bdf3db1eff34982a8bfa221c443bb69496c"} Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701168 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701249 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701285 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701386 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.701467 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.702304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.702820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.703657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.716257 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.716598 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.730635 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.731307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.741648 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.779465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.807956 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.812910 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.820936 4793 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.822276 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.828916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.828933 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.829619 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.829797 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.830412 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.830648 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dkqxx" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.830758 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.833697 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " pod="openstack/rabbitmq-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.896912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916850 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916877 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916895 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916929 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.916999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:05 crc kubenswrapper[4793]: I0130 14:04:05.917014 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018518 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018561 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018611 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018655 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.018733 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.019932 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.022361 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.030750 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.032158 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.038320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.050780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.050892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.052394 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.052910 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.063369 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.052029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.076677 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.088248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.214313 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.650332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerStarted","Data":"a95902e824bd19a3e1746ccd97d0b63e3b3629d4c2754b4eeaeedb289cd0a81a"} Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.915963 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:04:06 crc kubenswrapper[4793]: I0130 14:04:06.981489 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.032233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.033351 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.039697 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-lmpfw" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.039925 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.040134 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.040550 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.042225 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.065073 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139192 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139272 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9gw4\" (UniqueName: \"kubernetes.io/projected/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kube-api-access-p9gw4\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139362 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.139538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246323 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246477 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246520 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9gw4\" (UniqueName: \"kubernetes.io/projected/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kube-api-access-p9gw4\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246645 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246671 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.246712 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.248465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.248712 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.248931 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.249131 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.250763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.272529 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9gw4\" (UniqueName: \"kubernetes.io/projected/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-kube-api-access-p9gw4\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.272913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.283621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f45b0069-4cb7-4dfd-ac2d-1473cacbde1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.290757 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f\") " pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.466584 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.682786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerStarted","Data":"0efe8f891a233c8e5ac4fe6bb1b425a66ddbc8f34f8412134d77a42240eb7c39"} Jan 30 14:04:07 crc kubenswrapper[4793]: I0130 14:04:07.700395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerStarted","Data":"49420acdae0565905cd8f73dba3384bd4f0c8ed41985335ead11f16b3b125159"} Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.040895 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.100291 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.141145 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.147857 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hb24d" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.147918 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.148069 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.149184 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.150721 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274945 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.274971 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.275026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.275065 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brxc\" (UniqueName: \"kubernetes.io/projected/41e0025f-6abc-4554-b7a0-c132607aec86-kube-api-access-6brxc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.275102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6brxc\" (UniqueName: \"kubernetes.io/projected/41e0025f-6abc-4554-b7a0-c132607aec86-kube-api-access-6brxc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377950 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.377974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.378902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.379145 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.379332 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41e0025f-6abc-4554-b7a0-c132607aec86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.379761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.382527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e0025f-6abc-4554-b7a0-c132607aec86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.398456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 
14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.411966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0025f-6abc-4554-b7a0-c132607aec86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.414374 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6brxc\" (UniqueName: \"kubernetes.io/projected/41e0025f-6abc-4554-b7a0-c132607aec86-kube-api-access-6brxc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.443598 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41e0025f-6abc-4554-b7a0-c132607aec86\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.528665 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.627192 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.628107 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.631314 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.631528 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kn5v2" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.631661 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.653893 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684275 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-config-data\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8t4\" (UniqueName: \"kubernetes.io/projected/89e99d15-97ad-4ac5-ba68-82ef88460222-kube-api-access-qp8t4\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684399 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.684426 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-kolla-config\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788316 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp8t4\" (UniqueName: \"kubernetes.io/projected/89e99d15-97ad-4ac5-ba68-82ef88460222-kube-api-access-qp8t4\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788369 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788398 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-kolla-config\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.788459 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-config-data\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.791832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-kolla-config\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.801696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e99d15-97ad-4ac5-ba68-82ef88460222-config-data\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.806098 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerStarted","Data":"06e458f281786a13b324b174ac35ae3b7301d1d2d20e5f80ac0fd053e95b543a"} Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.812715 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp8t4\" (UniqueName: 
\"kubernetes.io/projected/89e99d15-97ad-4ac5-ba68-82ef88460222-kube-api-access-qp8t4\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.812907 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.813530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e99d15-97ad-4ac5-ba68-82ef88460222-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89e99d15-97ad-4ac5-ba68-82ef88460222\") " pod="openstack/memcached-0" Jan 30 14:04:08 crc kubenswrapper[4793]: I0130 14:04:08.958859 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 14:04:09 crc kubenswrapper[4793]: I0130 14:04:09.488246 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:04:09 crc kubenswrapper[4793]: I0130 14:04:09.832143 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 14:04:09 crc kubenswrapper[4793]: I0130 14:04:09.872410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerStarted","Data":"a2416f0e9999abe6cf0b1693538e57bb731071a12bc060d17ec264849e142bf1"} Jan 30 14:04:10 crc kubenswrapper[4793]: I0130 14:04:10.919718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"89e99d15-97ad-4ac5-ba68-82ef88460222","Type":"ContainerStarted","Data":"5dcc56db407340685fbbe2c142bb6566727831beca81ef596fce19fbee41c708"} Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.036610 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.037766 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.042721 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dz4v6" Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.082579 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.185343 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"kube-state-metrics-0\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " pod="openstack/kube-state-metrics-0" Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.289454 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"kube-state-metrics-0\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " pod="openstack/kube-state-metrics-0" Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.319852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"kube-state-metrics-0\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") " pod="openstack/kube-state-metrics-0" Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.381386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:04:11 crc kubenswrapper[4793]: I0130 14:04:11.993994 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:04:13 crc kubenswrapper[4793]: I0130 14:04:13.000698 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerStarted","Data":"71bf22217d9be03e116230139d0442df663407d89a0d201f8b40fe58cd8686cf"} Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.003039 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-45fd5"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.004378 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.012669 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4kssx" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.012767 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.018935 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.027867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.065314 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.082510 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.082617 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096129 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096265 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9s4dn" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096320 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096463 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.096610 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-combined-ca-bundle\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run\") pod \"ovn-controller-45fd5\" (UID: 
\"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230700ff-5087-4d0d-9d93-90b597d2ef72-scripts\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8kh\" (UniqueName: \"kubernetes.io/projected/230700ff-5087-4d0d-9d93-90b597d2ef72-kube-api-access-qm8kh\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.176807 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.203184 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-56x4d"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.205774 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-combined-ca-bundle\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278575 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ls5x\" (UniqueName: \"kubernetes.io/projected/bfa8998b-ee3a-4aea-80e8-c59620a5308a-kube-api-access-7ls5x\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230700ff-5087-4d0d-9d93-90b597d2ef72-scripts\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278690 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm8kh\" (UniqueName: \"kubernetes.io/projected/230700ff-5087-4d0d-9d93-90b597d2ef72-kube-api-access-qm8kh\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278744 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.278794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.279408 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.281486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230700ff-5087-4d0d-9d93-90b597d2ef72-scripts\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.281491 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-run\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.349097 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230700ff-5087-4d0d-9d93-90b597d2ef72-var-log-ovn\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.349879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-ovn-controller-tls-certs\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.350603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230700ff-5087-4d0d-9d93-90b597d2ef72-combined-ca-bundle\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.350648 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-56x4d"] Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.369671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm8kh\" (UniqueName: \"kubernetes.io/projected/230700ff-5087-4d0d-9d93-90b597d2ef72-kube-api-access-qm8kh\") pod \"ovn-controller-45fd5\" (UID: \"230700ff-5087-4d0d-9d93-90b597d2ef72\") " pod="openstack/ovn-controller-45fd5" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381195 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381212 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ls5x\" (UniqueName: \"kubernetes.io/projected/bfa8998b-ee3a-4aea-80e8-c59620a5308a-kube-api-access-7ls5x\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381247 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381269 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd56\" (UniqueName: \"kubernetes.io/projected/f6d71a04-6d3d-4444-9963-950135c3d6da-kube-api-access-ntd56\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381348 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d71a04-6d3d-4444-9963-950135c3d6da-scripts\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381372 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-lib\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-run\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-log\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: 
I0130 14:04:14.381442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.381498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-etc-ovs\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.382019 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.382426 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.382903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.383993 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa8998b-ee3a-4aea-80e8-c59620a5308a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.424766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.428932 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.429508 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa8998b-ee3a-4aea-80e8-c59620a5308a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.440147 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ls5x\" (UniqueName: \"kubernetes.io/projected/bfa8998b-ee3a-4aea-80e8-c59620a5308a-kube-api-access-7ls5x\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.467289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfa8998b-ee3a-4aea-80e8-c59620a5308a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483356 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-etc-ovs\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd56\" (UniqueName: \"kubernetes.io/projected/f6d71a04-6d3d-4444-9963-950135c3d6da-kube-api-access-ntd56\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d71a04-6d3d-4444-9963-950135c3d6da-scripts\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-lib\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483591 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-run\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-log\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.483944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-log\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.485094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-etc-ovs\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.485251 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-lib\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.485318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6d71a04-6d3d-4444-9963-950135c3d6da-var-run\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.487925 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.499986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6d71a04-6d3d-4444-9963-950135c3d6da-scripts\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.522793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd56\" (UniqueName: \"kubernetes.io/projected/f6d71a04-6d3d-4444-9963-950135c3d6da-kube-api-access-ntd56\") pod \"ovn-controller-ovs-56x4d\" (UID: \"f6d71a04-6d3d-4444-9963-950135c3d6da\") " pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.552448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:14 crc kubenswrapper[4793]: I0130 14:04:14.648568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-45fd5" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.841957 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.851767 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.854850 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.855028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9qtfg" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.855198 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.855736 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.863273 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982194 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/285be7d6-1f03-43af-8087-46ba257183ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982321 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45t5\" (UniqueName: \"kubernetes.io/projected/285be7d6-1f03-43af-8087-46ba257183ec-kube-api-access-v45t5\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:17 crc kubenswrapper[4793]: I0130 14:04:17.982569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083718 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/285be7d6-1f03-43af-8087-46ba257183ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083893 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45t5\" (UniqueName: \"kubernetes.io/projected/285be7d6-1f03-43af-8087-46ba257183ec-kube-api-access-v45t5\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.083945 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.084259 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.086285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/285be7d6-1f03-43af-8087-46ba257183ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.086327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.087934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285be7d6-1f03-43af-8087-46ba257183ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.099007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.099375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.100215 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285be7d6-1f03-43af-8087-46ba257183ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.105149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45t5\" (UniqueName: \"kubernetes.io/projected/285be7d6-1f03-43af-8087-46ba257183ec-kube-api-access-v45t5\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.105896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"285be7d6-1f03-43af-8087-46ba257183ec\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:18 crc kubenswrapper[4793]: I0130 14:04:18.195458 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.752533 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.753286 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rck4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0ab4371b-53c0-41a1-9561-0c02f936c7a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.754438 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.767115 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.767716 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f59v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(5a4cd276-23a5-4acb-bb1b-41470a11c945): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:29 crc kubenswrapper[4793]: E0130 14:04:29.768965 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" Jan 30 14:04:30 crc kubenswrapper[4793]: E0130 14:04:30.186377 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" Jan 30 14:04:30 crc kubenswrapper[4793]: E0130 14:04:30.187319 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" Jan 30 14:04:32 crc kubenswrapper[4793]: E0130 14:04:32.882463 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 14:04:32 crc kubenswrapper[4793]: E0130 14:04:32.882656 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6brxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(41e0025f-6abc-4554-b7a0-c132607aec86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:32 crc kubenswrapper[4793]: E0130 14:04:32.884192 4793 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="41e0025f-6abc-4554-b7a0-c132607aec86" Jan 30 14:04:33 crc kubenswrapper[4793]: E0130 14:04:33.203192 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="41e0025f-6abc-4554-b7a0-c132607aec86" Jan 30 14:04:34 crc kubenswrapper[4793]: E0130 14:04:34.601888 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 30 14:04:34 crc kubenswrapper[4793]: E0130 14:04:34.602310 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nb9h78h5d4h96h679h56ch556hcbhdh6dh68fh585h577h68dhc5h5h5dch5dch84h545h664h5ffhcbh596h58bh5f5h8dh67dh5hbdh84h577q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(89e99d15-97ad-4ac5-ba68-82ef88460222): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:34 crc kubenswrapper[4793]: E0130 14:04:34.604028 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="89e99d15-97ad-4ac5-ba68-82ef88460222" Jan 30 14:04:35 crc kubenswrapper[4793]: E0130 14:04:35.226424 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="89e99d15-97ad-4ac5-ba68-82ef88460222" Jan 30 14:04:37 crc kubenswrapper[4793]: E0130 14:04:37.551255 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 14:04:37 crc kubenswrapper[4793]: E0130 14:04:37.551992 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9gw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(f45b0069-4cb7-4dfd-ac2d-1473cacbde1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:37 crc kubenswrapper[4793]: E0130 14:04:37.553441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" Jan 30 14:04:37 crc kubenswrapper[4793]: I0130 14:04:37.800517 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5"] Jan 30 14:04:38 crc kubenswrapper[4793]: E0130 14:04:38.252349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" Jan 30 14:04:42 crc kubenswrapper[4793]: W0130 14:04:42.476870 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod230700ff_5087_4d0d_9d93_90b597d2ef72.slice/crio-0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7 WatchSource:0}: Error finding container 0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7: Status 404 returned error can't find the container with id 0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7 Jan 30 14:04:42 
crc kubenswrapper[4793]: I0130 14:04:42.985800 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:04:43 crc kubenswrapper[4793]: I0130 14:04:43.289219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5" event={"ID":"230700ff-5087-4d0d-9d93-90b597d2ef72","Type":"ContainerStarted","Data":"0497e633d51d231326624a55b74dba39fa0af0181bfded4d7119186802db32a7"} Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.583715 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.584974 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw6fw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-6twpw_openstack(57f8cfde-399c-43ec-bf72-e96f12a05ae2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.586776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.609244 4793 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.609388 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-278cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-qtp9b_openstack(ea64ca1b-5302-40cc-9918-810b75c36240): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.610669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" podUID="ea64ca1b-5302-40cc-9918-810b75c36240" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.634116 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.634276 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xvlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tngjn_openstack(a6047db8-60b6-4b1d-94d0-9934475fb39e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.635469 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" podUID="a6047db8-60b6-4b1d-94d0-9934475fb39e" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.675829 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.676030 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lhk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-vfvss_openstack(4ebaeca8-f301-4d75-8691-98415ddcf7e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:04:43 crc kubenswrapper[4793]: E0130 14:04:43.677298 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" Jan 30 14:04:43 crc kubenswrapper[4793]: I0130 14:04:43.758781 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.069452 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-56x4d"] Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.298427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfa8998b-ee3a-4aea-80e8-c59620a5308a","Type":"ContainerStarted","Data":"a578141da421138078dc94afb22e8ec18c67185d426c8e546c675b69f313a882"} Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.300203 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"285be7d6-1f03-43af-8087-46ba257183ec","Type":"ContainerStarted","Data":"a6c864ea805244cc9f917b3520d929aaa74ad2b7a49a41c11a44442dc5a601c0"} Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.301945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" 
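(Annotation. The four ErrImagePull records above all report "rpc error: code = Canceled desc = copying config: context canceled" from the CRI side: the image copy was canceled mid-pull rather than rejected by the registry, which is consistent with the dnsmasq Deployments being rolled over here, since two of the four pods, dnsmasq-dns-78dd6ddcc-qtp9b and dnsmasq-dns-675f4bcbfc-tngjn, are deleted moments later in the 14:04:45 records. Once a pull fails this way, the kubelet re-queues the pod and subsequent sync attempts report ImagePullBackOff while the backoff window is open, as in the surrounding 14:04:44 records; by default the image pull backoff starts at 10s and doubles up to a 300s cap, after which the pull is retried, which is why the two surviving pods reach ContainerStarted in the 14:04:56-14:04:57 records further down. A minimal diagnostic sketch in Go, assuming standard client-go against the same cluster; the namespace and pod name are taken from the records above, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Pod named in the ErrImagePull / ImagePullBackOff records above.
	pod, err := client.CoreV1().Pods("openstack").Get(context.TODO(),
		"dnsmasq-dns-57d769cc4f-vfvss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// For an init container stuck pulling, State.Waiting carries the same
	// Reason/Message pair the kubelet logs: first ErrImagePull, then
	// ImagePullBackOff while the kubelet waits before retrying the pull.
	for _, s := range pod.Status.InitContainerStatuses {
		if w := s.State.Waiting; w != nil {
			fmt.Printf("%s: %s: %s\n", s.Name, w.Reason, w.Message)
		}
	}
}

The same Reason/Message pair is what "oc describe pod -n openstack dnsmasq-dns-57d769cc4f-vfvss" surfaces as pod events, so either view can be used to confirm whether a pod is still in the backoff window or the pull has been retried.)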
Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.304654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" Jan 30 14:04:44 crc kubenswrapper[4793]: W0130 14:04:44.631934 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d71a04_6d3d_4444_9963_950135c3d6da.slice/crio-5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6 WatchSource:0}: Error finding container 5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6: Status 404 returned error can't find the container with id 5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6 Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.696936 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.697103 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.697303 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g555f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(e61af9bc-c79d-4e81-a602-37afbdc017a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 14:04:44 crc kubenswrapper[4793]: E0130 14:04:44.699392 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.881571 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:44 crc kubenswrapper[4793]: I0130 14:04:44.936568 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037514 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") pod \"ea64ca1b-5302-40cc-9918-810b75c36240\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") pod \"ea64ca1b-5302-40cc-9918-810b75c36240\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") pod \"a6047db8-60b6-4b1d-94d0-9934475fb39e\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037664 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") pod \"a6047db8-60b6-4b1d-94d0-9934475fb39e\" (UID: \"a6047db8-60b6-4b1d-94d0-9934475fb39e\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.037689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") pod \"ea64ca1b-5302-40cc-9918-810b75c36240\" (UID: \"ea64ca1b-5302-40cc-9918-810b75c36240\") " Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.038164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config" (OuterVolumeSpecName: "config") pod "ea64ca1b-5302-40cc-9918-810b75c36240" (UID: "ea64ca1b-5302-40cc-9918-810b75c36240"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.038315 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea64ca1b-5302-40cc-9918-810b75c36240" (UID: "ea64ca1b-5302-40cc-9918-810b75c36240"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.039081 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config" (OuterVolumeSpecName: "config") pod "a6047db8-60b6-4b1d-94d0-9934475fb39e" (UID: "a6047db8-60b6-4b1d-94d0-9934475fb39e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.042489 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt" (OuterVolumeSpecName: "kube-api-access-8xvlt") pod "a6047db8-60b6-4b1d-94d0-9934475fb39e" (UID: "a6047db8-60b6-4b1d-94d0-9934475fb39e"). InnerVolumeSpecName "kube-api-access-8xvlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.043061 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb" (OuterVolumeSpecName: "kube-api-access-278cb") pod "ea64ca1b-5302-40cc-9918-810b75c36240" (UID: "ea64ca1b-5302-40cc-9918-810b75c36240"). InnerVolumeSpecName "kube-api-access-278cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139526 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139557 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea64ca1b-5302-40cc-9918-810b75c36240-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139568 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xvlt\" (UniqueName: \"kubernetes.io/projected/a6047db8-60b6-4b1d-94d0-9934475fb39e-kube-api-access-8xvlt\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139580 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6047db8-60b6-4b1d-94d0-9934475fb39e-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.139593 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278cb\" (UniqueName: \"kubernetes.io/projected/ea64ca1b-5302-40cc-9918-810b75c36240-kube-api-access-278cb\") on node \"crc\" DevicePath \"\"" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.309388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" event={"ID":"ea64ca1b-5302-40cc-9918-810b75c36240","Type":"ContainerDied","Data":"ee3c031683159179731efba2dde35050df6b60a59cdc2e43e0c06f26ed4f9d1f"} Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.309466 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qtp9b" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.313147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerStarted","Data":"5b867f8c434aee6351f262d8f4a956b837d686a77bf5b0ec609636f858a04ea6"} Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.315802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" event={"ID":"a6047db8-60b6-4b1d-94d0-9934475fb39e","Type":"ContainerDied","Data":"0e74e31437b5ab3a1ef1d51edaf0ec5456ff4ca346069331e5b2b21dd1a4df28"} Jan 30 14:04:45 crc kubenswrapper[4793]: E0130 14:04:45.316543 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.316693 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tngjn" Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.406338 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.417764 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qtp9b"] Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.439251 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:45 crc kubenswrapper[4793]: I0130 14:04:45.447561 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tngjn"] Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.321739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerStarted","Data":"d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991"} Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.326168 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerStarted","Data":"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48"} Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.410287 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6047db8-60b6-4b1d-94d0-9934475fb39e" path="/var/lib/kubelet/pods/a6047db8-60b6-4b1d-94d0-9934475fb39e/volumes" Jan 30 14:04:46 crc kubenswrapper[4793]: I0130 14:04:46.410759 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea64ca1b-5302-40cc-9918-810b75c36240" path="/var/lib/kubelet/pods/ea64ca1b-5302-40cc-9918-810b75c36240/volumes" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.408410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"285be7d6-1f03-43af-8087-46ba257183ec","Type":"ContainerStarted","Data":"5f977086a20135b5c73312cd73f299f0c72f0872684a6d3b87673481e31d8f46"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409208 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-45fd5" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerStarted","Data":"a5f690625509d9f182522efae60dbd8b14b995b3093c366d0783ec9f47faf44f"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5" event={"ID":"230700ff-5087-4d0d-9d93-90b597d2ef72","Type":"ContainerStarted","Data":"5b237d565754ec86efd0a672aecff5cd47e2a2edf65044217fa18c12e2cddad3"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"89e99d15-97ad-4ac5-ba68-82ef88460222","Type":"ContainerStarted","Data":"6c7459b57017b64fa7fafbd9f1661b0078e148ac66792474ac9fc92f81b472a4"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.409986 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.410265 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"bfa8998b-ee3a-4aea-80e8-c59620a5308a","Type":"ContainerStarted","Data":"95a3843fa64746a2ae326f96cf6556335e8a8fc9fe27e573d8ff111ced9b3403"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.412722 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerDied","Data":"98df26c156510140f51b0afd7722ffaa1126f3e1b6a146ea7bd95ff308fac46b"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.413161 4793 generic.go:334] "Generic (PLEG): container finished" podID="f6d71a04-6d3d-4444-9963-950135c3d6da" containerID="98df26c156510140f51b0afd7722ffaa1126f3e1b6a146ea7bd95ff308fac46b" exitCode=0 Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.415097 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerStarted","Data":"133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75"} Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.430624 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-45fd5" podStartSLOduration=30.604106195 podStartE2EDuration="41.430603681s" podCreationTimestamp="2026-01-30 14:04:13 +0000 UTC" firstStartedPulling="2026-01-30 14:04:42.478785147 +0000 UTC m=+1293.180133638" lastFinishedPulling="2026-01-30 14:04:53.305282633 +0000 UTC m=+1304.006631124" observedRunningTime="2026-01-30 14:04:54.42159866 +0000 UTC m=+1305.122947161" watchObservedRunningTime="2026-01-30 14:04:54.430603681 +0000 UTC m=+1305.131952172" Jan 30 14:04:54 crc kubenswrapper[4793]: I0130 14:04:54.519174 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.010303573 podStartE2EDuration="46.519154742s" podCreationTimestamp="2026-01-30 14:04:08 +0000 UTC" firstStartedPulling="2026-01-30 14:04:09.911986677 +0000 UTC m=+1260.613335168" lastFinishedPulling="2026-01-30 14:04:53.420837806 +0000 UTC m=+1304.122186337" observedRunningTime="2026-01-30 14:04:54.514537979 +0000 UTC m=+1305.215886480" watchObservedRunningTime="2026-01-30 14:04:54.519154742 +0000 UTC m=+1305.220503233" Jan 30 14:04:55 crc kubenswrapper[4793]: I0130 14:04:55.426012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerStarted","Data":"aa17ab4cf043ac7bf510f1a779d7a49c0b8bc619c395d3dfa5231c885d485193"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.433352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfa8998b-ee3a-4aea-80e8-c59620a5308a","Type":"ContainerStarted","Data":"7373e9ef498cd121e57fc24eb191a80970b3c3bae2c9482b6bca66cad3fa8fdd"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.435833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-56x4d" event={"ID":"f6d71a04-6d3d-4444-9963-950135c3d6da","Type":"ContainerStarted","Data":"bb31e04ec262f0558eb898cc652abac461a20ac4bc486d22c80fbbc39c3c7bdd"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.435999 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.436220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:04:56 crc 
kubenswrapper[4793]: I0130 14:04:56.438325 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"285be7d6-1f03-43af-8087-46ba257183ec","Type":"ContainerStarted","Data":"92d9f11da992a79894aa252d4fbcd2a2ad7caedd58a70a7c719fcca59c378de2"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.440101 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerID="3f70174b11e96cdd2d573d9ee24e4219762e2a0529f8d646d037440b2831590b" exitCode=0 Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.440163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerDied","Data":"3f70174b11e96cdd2d573d9ee24e4219762e2a0529f8d646d037440b2831590b"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.441711 4793 generic.go:334] "Generic (PLEG): container finished" podID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerID="e55e6db12bc091de69952e0e4d9fe2c04ddaa0a5ca5e5c173912be87073539b1" exitCode=0 Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.441822 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerDied","Data":"e55e6db12bc091de69952e0e4d9fe2c04ddaa0a5ca5e5c173912be87073539b1"} Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.485720 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=32.705700186 podStartE2EDuration="44.485699064s" podCreationTimestamp="2026-01-30 14:04:12 +0000 UTC" firstStartedPulling="2026-01-30 14:04:43.5478538 +0000 UTC m=+1294.249202331" lastFinishedPulling="2026-01-30 14:04:55.327852718 +0000 UTC m=+1306.029201209" observedRunningTime="2026-01-30 14:04:56.460415224 +0000 UTC m=+1307.161763725" watchObservedRunningTime="2026-01-30 14:04:56.485699064 +0000 UTC m=+1307.187047565" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.488957 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.557550 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-56x4d" podStartSLOduration=33.955135829 podStartE2EDuration="42.557485834s" podCreationTimestamp="2026-01-30 14:04:14 +0000 UTC" firstStartedPulling="2026-01-30 14:04:44.700462877 +0000 UTC m=+1295.401811368" lastFinishedPulling="2026-01-30 14:04:53.302812882 +0000 UTC m=+1304.004161373" observedRunningTime="2026-01-30 14:04:56.543111352 +0000 UTC m=+1307.244459883" watchObservedRunningTime="2026-01-30 14:04:56.557485834 +0000 UTC m=+1307.258834325" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.557835 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:56 crc kubenswrapper[4793]: I0130 14:04:56.576781 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=29.068562942 podStartE2EDuration="40.576760046s" podCreationTimestamp="2026-01-30 14:04:16 +0000 UTC" firstStartedPulling="2026-01-30 14:04:43.80357431 +0000 UTC m=+1294.504922801" lastFinishedPulling="2026-01-30 14:04:55.311771414 +0000 UTC m=+1306.013119905" observedRunningTime="2026-01-30 14:04:56.568106634 +0000 UTC m=+1307.269455155" 
watchObservedRunningTime="2026-01-30 14:04:56.576760046 +0000 UTC m=+1307.278108537" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.195914 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.246292 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: E0130 14:04:57.305384 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45b0069_4cb7_4dfd_ac2d_1473cacbde1f.slice/crio-133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.452315 4793 generic.go:334] "Generic (PLEG): container finished" podID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerID="133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75" exitCode=0 Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.452356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerDied","Data":"133d4bcbeb7456f153385eff906c7efb12649856c47bafc5796c8ad2d5657a75"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.455486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerStarted","Data":"b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.456657 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.459113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerStarted","Data":"239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.465882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.468494 4793 generic.go:334] "Generic (PLEG): container finished" podID="41e0025f-6abc-4554-b7a0-c132607aec86" containerID="a5f690625509d9f182522efae60dbd8b14b995b3093c366d0783ec9f47faf44f" exitCode=0 Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.470093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerDied","Data":"a5f690625509d9f182522efae60dbd8b14b995b3093c366d0783ec9f47faf44f"} Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.493928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.493961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.520989 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podStartSLOduration=3.263026386 podStartE2EDuration="53.520926433s" 
podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:05.615033264 +0000 UTC m=+1256.316381755" lastFinishedPulling="2026-01-30 14:04:55.872933311 +0000 UTC m=+1306.574281802" observedRunningTime="2026-01-30 14:04:57.512612739 +0000 UTC m=+1308.213961270" watchObservedRunningTime="2026-01-30 14:04:57.520926433 +0000 UTC m=+1308.222274944" Jan 30 14:04:57 crc kubenswrapper[4793]: I0130 14:04:57.570395 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podStartSLOduration=3.1576393 podStartE2EDuration="53.570375075s" podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:05.40361895 +0000 UTC m=+1256.104967441" lastFinishedPulling="2026-01-30 14:04:55.816354715 +0000 UTC m=+1306.517703216" observedRunningTime="2026-01-30 14:04:57.555398589 +0000 UTC m=+1308.256747080" watchObservedRunningTime="2026-01-30 14:04:57.570375075 +0000 UTC m=+1308.271723566" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.255240 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.478688 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f45b0069-4cb7-4dfd-ac2d-1473cacbde1f","Type":"ContainerStarted","Data":"d056557fce99c07acb071a67afa2e1446c3feab1b82855ca8a754b04b8e74676"} Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.487297 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41e0025f-6abc-4554-b7a0-c132607aec86","Type":"ContainerStarted","Data":"dddf25c087963445e2a1fc98cd0aa5ea8ba0709bb8e76a65ef0bcde18ddca387"} Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.508760 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.422228817 podStartE2EDuration="53.508022923s" podCreationTimestamp="2026-01-30 14:04:05 +0000 UTC" firstStartedPulling="2026-01-30 14:04:08.220607304 +0000 UTC m=+1258.921955795" lastFinishedPulling="2026-01-30 14:04:53.30640141 +0000 UTC m=+1304.007749901" observedRunningTime="2026-01-30 14:04:58.502891727 +0000 UTC m=+1309.204240218" watchObservedRunningTime="2026-01-30 14:04:58.508022923 +0000 UTC m=+1309.209371414" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.530298 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.530362 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.551055 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.706068017 podStartE2EDuration="51.551025437s" podCreationTimestamp="2026-01-30 14:04:07 +0000 UTC" firstStartedPulling="2026-01-30 14:04:09.577814373 +0000 UTC m=+1260.279162864" lastFinishedPulling="2026-01-30 14:04:53.422771793 +0000 UTC m=+1304.124120284" observedRunningTime="2026-01-30 14:04:58.542420597 +0000 UTC m=+1309.243769088" watchObservedRunningTime="2026-01-30 14:04:58.551025437 +0000 UTC m=+1309.252373928" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.571560 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 
14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.617146 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.618535 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.624248 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.657304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.688642 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vx7z5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.689817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.694314 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.726092 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vx7z5"] Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.787846 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-config\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.787899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-combined-ca-bundle\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.787988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovn-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788073 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788112 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovs-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8lf\" (UniqueName: \"kubernetes.io/projected/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-kube-api-access-rt8lf\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788155 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788173 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.788187 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.884309 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-config\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-combined-ca-bundle\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891331 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovn-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " 
pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovs-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891448 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8lf\" (UniqueName: \"kubernetes.io/projected/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-kube-api-access-rt8lf\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.891512 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.892247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-config\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.893033 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.906800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-combined-ca-bundle\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.907427 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovn-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.907747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-ovs-rundir\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.908492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.909167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.931782 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.935596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8lf\" (UniqueName: \"kubernetes.io/projected/2eaf3033-e5f4-48bc-bdee-b7d97e57e765-kube-api-access-rt8lf\") pod \"ovn-controller-metrics-vx7z5\" (UID: \"2eaf3033-e5f4-48bc-bdee-b7d97e57e765\") " pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.945166 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"dnsmasq-dns-7f896c8c65-znzw5\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:58 crc kubenswrapper[4793]: I0130 14:04:58.962199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.035571 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vx7z5" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.145692 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.187076 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.191500 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.195704 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.221294 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.245623 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311785 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311837 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.311986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.312077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.413663 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414118 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414183 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.414949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.415101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.415686 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.415724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.440202 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"dnsmasq-dns-86db49b7ff-jn5sc\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.506953 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" containerID="cri-o://239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f" gracePeriod=10 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.511158 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" 
containerID="cri-o://b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689" gracePeriod=10 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.584328 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.603385 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.605152 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.607169 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.607708 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.608160 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g7cb6" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.608449 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.613564 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.800829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vx7z5"] Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829253 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829306 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829370 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-config\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: W0130 14:04:59.829375 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eaf3033_e5f4_48bc_bdee_b7d97e57e765.slice/crio-bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8 WatchSource:0}: Error finding container bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8: Status 404 returned error can't 
find the container with id bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-scripts\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829453 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtdm\" (UniqueName: \"kubernetes.io/projected/270527bd-015e-4904-8916-07993e081611-kube-api-access-qmtdm\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.829521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/270527bd-015e-4904-8916-07993e081611-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/270527bd-015e-4904-8916-07993e081611-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941794 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941883 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-config\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941945 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-scripts\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.941964 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmtdm\" (UniqueName: 
\"kubernetes.io/projected/270527bd-015e-4904-8916-07993e081611-kube-api-access-qmtdm\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.942708 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/270527bd-015e-4904-8916-07993e081611-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.947577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.947629 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-scripts\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.948007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270527bd-015e-4904-8916-07993e081611-config\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.951339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.960884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/270527bd-015e-4904-8916-07993e081611-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.966996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmtdm\" (UniqueName: \"kubernetes.io/projected/270527bd-015e-4904-8916-07993e081611-kube-api-access-qmtdm\") pod \"ovn-northd-0\" (UID: \"270527bd-015e-4904-8916-07993e081611\") " pod="openstack/ovn-northd-0" Jan 30 14:04:59 crc kubenswrapper[4793]: W0130 14:04:59.973691 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod085da052_4aff_4c31_a5ac_398194b443a2.slice/crio-8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48 WatchSource:0}: Error finding container 8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48: Status 404 returned error can't find the container with id 8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48 Jan 30 14:04:59 crc kubenswrapper[4793]: I0130 14:04:59.978819 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.162232 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:00 crc kubenswrapper[4793]: W0130 
14:05:00.172247 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6997fc47_52ce_4421_b8bc_14ad27f1d522.slice/crio-47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f WatchSource:0}: Error finding container 47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f: Status 404 returned error can't find the container with id 47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.239962 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.513966 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerStarted","Data":"47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.516235 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerStarted","Data":"7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.517125 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.519577 4793 generic.go:334] "Generic (PLEG): container finished" podID="085da052-4aff-4c31-a5ac-398194b443a2" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" exitCode=0 Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.519684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerDied","Data":"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.519715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerStarted","Data":"8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.533357 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerID="b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689" exitCode=0 Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.533454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerDied","Data":"b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.534915 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.639576914 podStartE2EDuration="49.534904732s" podCreationTimestamp="2026-01-30 14:04:11 +0000 UTC" firstStartedPulling="2026-01-30 14:04:12.034901976 +0000 UTC m=+1262.736250467" lastFinishedPulling="2026-01-30 14:04:58.930229794 +0000 UTC m=+1309.631578285" observedRunningTime="2026-01-30 14:05:00.533428126 +0000 UTC m=+1311.234776617" watchObservedRunningTime="2026-01-30 14:05:00.534904732 +0000 UTC m=+1311.236253223" Jan 30 14:05:00 crc 
kubenswrapper[4793]: I0130 14:05:00.539526 4793 generic.go:334] "Generic (PLEG): container finished" podID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerID="239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f" exitCode=0 Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.539595 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerDied","Data":"239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.554859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vx7z5" event={"ID":"2eaf3033-e5f4-48bc-bdee-b7d97e57e765","Type":"ContainerStarted","Data":"f410276e211f4a96a871fa2d9e8b4c4d50ce43f15034df0d2b438a9f073dbdf6"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.554898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vx7z5" event={"ID":"2eaf3033-e5f4-48bc-bdee-b7d97e57e765","Type":"ContainerStarted","Data":"bd9d37bdb1810d24827a2a5ee11a475d8d037c0789fa55b6595cd8fa830b73a8"} Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.583968 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vx7z5" podStartSLOduration=2.5839494739999997 podStartE2EDuration="2.583949474s" podCreationTimestamp="2026-01-30 14:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:00.58375464 +0000 UTC m=+1311.285103131" watchObservedRunningTime="2026-01-30 14:05:00.583949474 +0000 UTC m=+1311.285297965" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.628077 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.651967 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767347 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") pod \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") pod \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") pod \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") pod \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767558 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") pod \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\" (UID: \"4ebaeca8-f301-4d75-8691-98415ddcf7e2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.767665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") pod \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\" (UID: \"57f8cfde-399c-43ec-bf72-e96f12a05ae2\") " Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.776285 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6" (OuterVolumeSpecName: "kube-api-access-7lhk6") pod "4ebaeca8-f301-4d75-8691-98415ddcf7e2" (UID: "4ebaeca8-f301-4d75-8691-98415ddcf7e2"). InnerVolumeSpecName "kube-api-access-7lhk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.783011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw" (OuterVolumeSpecName: "kube-api-access-mw6fw") pod "57f8cfde-399c-43ec-bf72-e96f12a05ae2" (UID: "57f8cfde-399c-43ec-bf72-e96f12a05ae2"). InnerVolumeSpecName "kube-api-access-mw6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.824978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ebaeca8-f301-4d75-8691-98415ddcf7e2" (UID: "4ebaeca8-f301-4d75-8691-98415ddcf7e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.837699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config" (OuterVolumeSpecName: "config") pod "57f8cfde-399c-43ec-bf72-e96f12a05ae2" (UID: "57f8cfde-399c-43ec-bf72-e96f12a05ae2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.838755 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57f8cfde-399c-43ec-bf72-e96f12a05ae2" (UID: "57f8cfde-399c-43ec-bf72-e96f12a05ae2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.838770 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config" (OuterVolumeSpecName: "config") pod "4ebaeca8-f301-4d75-8691-98415ddcf7e2" (UID: "4ebaeca8-f301-4d75-8691-98415ddcf7e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869709 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869748 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhk6\" (UniqueName: \"kubernetes.io/projected/4ebaeca8-f301-4d75-8691-98415ddcf7e2-kube-api-access-7lhk6\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869758 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869766 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57f8cfde-399c-43ec-bf72-e96f12a05ae2-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869773 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ebaeca8-f301-4d75-8691-98415ddcf7e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.869783 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw6fw\" (UniqueName: \"kubernetes.io/projected/57f8cfde-399c-43ec-bf72-e96f12a05ae2-kube-api-access-mw6fw\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:00 crc kubenswrapper[4793]: I0130 14:05:00.937461 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:05:00 crc kubenswrapper[4793]: W0130 14:05:00.955665 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod270527bd_015e_4904_8916_07993e081611.slice/crio-e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28 WatchSource:0}: Error finding container e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28: Status 404 returned error can't find the container with id 
e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28 Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.405571 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454599 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454894 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454907 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454923 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454954 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454960 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="init" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.454971 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.454977 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.455131 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.455149 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" containerName="dnsmasq-dns" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.455925 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.498571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512226 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512295 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512387 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.512466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.571987 4793 generic.go:334] "Generic (PLEG): container finished" podID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerID="dc354132d0a6cd02111dfdce273ff0e36cd8eedf4408a97ce6c6cb48e38782b8" exitCode=0 Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.572077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerDied","Data":"dc354132d0a6cd02111dfdce273ff0e36cd8eedf4408a97ce6c6cb48e38782b8"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.605727 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerStarted","Data":"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.606918 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tsm\" (UniqueName: 
\"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613776 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613804 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.613897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.614738 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.616225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.616721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.617288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.624032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" event={"ID":"4ebaeca8-f301-4d75-8691-98415ddcf7e2","Type":"ContainerDied","Data":"a95902e824bd19a3e1746ccd97d0b63e3b3629d4c2754b4eeaeedb289cd0a81a"} 
Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.624114 4793 scope.go:117] "RemoveContainer" containerID="b2b7d7383e6d798392eb551693b015b04e338eaf766fb65a0aced7e6d9610689" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.624240 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vfvss" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.641471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" event={"ID":"57f8cfde-399c-43ec-bf72-e96f12a05ae2","Type":"ContainerDied","Data":"b6d25f5f6c7c96e5312511cdf0154bdf3db1eff34982a8bfa221c443bb69496c"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.641619 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6twpw" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.641699 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" podStartSLOduration=3.641681555 podStartE2EDuration="3.641681555s" podCreationTimestamp="2026-01-30 14:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:01.635873353 +0000 UTC m=+1312.337221844" watchObservedRunningTime="2026-01-30 14:05:01.641681555 +0000 UTC m=+1312.343030046" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.648811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"270527bd-015e-4904-8916-07993e081611","Type":"ContainerStarted","Data":"e47fc347968ce0ee2b82515fe6e633960e858ff09d5b117f3981643743bece28"} Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.658613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"dnsmasq-dns-698758b865-tp7zf\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.738416 4793 scope.go:117] "RemoveContainer" containerID="3f70174b11e96cdd2d573d9ee24e4219762e2a0529f8d646d037440b2831590b" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.757800 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.771890 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6twpw"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.780253 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.786876 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vfvss"] Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.789077 4793 scope.go:117] "RemoveContainer" containerID="239a19f7152c99455b1d91f01ca7ce00ae83e90bc20fab1b576eaab8c2bb029f" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.808659 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:01 crc kubenswrapper[4793]: I0130 14:05:01.854961 4793 scope.go:117] "RemoveContainer" containerID="e55e6db12bc091de69952e0e4d9fe2c04ddaa0a5ca5e5c173912be87073539b1" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.877585 4793 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 14:05:01 crc kubenswrapper[4793]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 14:05:01 crc kubenswrapper[4793]: > podSandboxID="47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.877795 4793 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 14:05:01 crc kubenswrapper[4793]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnxfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-jn5sc_openstack(6997fc47-52ce-4421-b8bc-14ad27f1d522): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 14:05:01 crc kubenswrapper[4793]: > logger="UnhandledError" Jan 30 14:05:01 crc kubenswrapper[4793]: E0130 14:05:01.879957 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.322810 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.412097 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebaeca8-f301-4d75-8691-98415ddcf7e2" path="/var/lib/kubelet/pods/4ebaeca8-f301-4d75-8691-98415ddcf7e2/volumes" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.413546 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f8cfde-399c-43ec-bf72-e96f12a05ae2" path="/var/lib/kubelet/pods/57f8cfde-399c-43ec-bf72-e96f12a05ae2/volumes" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.631005 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.640938 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.643123 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.643564 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.650169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vvrcq" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.657909 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.658487 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerStarted","Data":"d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1"} Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.658867 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerStarted","Data":"d3a25e8a3b91c8c4040360de5d0cfe31c348e5b8ddffa9f734cc6f66d6f94231"} Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.660617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.662413 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" containerID="cri-o://1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" gracePeriod=10 Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835097 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dgdw\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-kube-api-access-5dgdw\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-cache\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835188 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76182868-5b55-403e-a2be-0c6879e9a2e0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835310 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.835397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-lock\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.936978 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937102 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-lock\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937177 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dgdw\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-kube-api-access-5dgdw\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-cache\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.937279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76182868-5b55-403e-a2be-0c6879e9a2e0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.942126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76182868-5b55-403e-a2be-0c6879e9a2e0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: E0130 14:05:02.942278 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:02 crc kubenswrapper[4793]: E0130 14:05:02.942294 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:02 crc kubenswrapper[4793]: E0130 14:05:02.942343 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:03.442323571 +0000 UTC m=+1314.143672072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.942917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-cache\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.943180 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.950762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/76182868-5b55-403e-a2be-0c6879e9a2e0-lock\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.971550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dgdw\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-kube-api-access-5dgdw\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:02 crc kubenswrapper[4793]: I0130 14:05:02.979339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.024233 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.154421 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.204670 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352209 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352288 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352385 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.352523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") pod \"085da052-4aff-4c31-a5ac-398194b443a2\" (UID: \"085da052-4aff-4c31-a5ac-398194b443a2\") " Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.355454 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks" (OuterVolumeSpecName: "kube-api-access-h8tks") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "kube-api-access-h8tks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.398448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.406643 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.407019 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config" (OuterVolumeSpecName: "config") pod "085da052-4aff-4c31-a5ac-398194b443a2" (UID: "085da052-4aff-4c31-a5ac-398194b443a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.454879 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455007 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455019 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/085da052-4aff-4c31-a5ac-398194b443a2-kube-api-access-h8tks\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455029 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.455037 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085da052-4aff-4c31-a5ac-398194b443a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.455188 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.455200 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.455242 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:04.455228065 +0000 UTC m=+1315.156576556 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.670626 4793 generic.go:334] "Generic (PLEG): container finished" podID="085da052-4aff-4c31-a5ac-398194b443a2" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" exitCode=0 Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.670816 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerDied","Data":"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.671059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" event={"ID":"085da052-4aff-4c31-a5ac-398194b443a2","Type":"ContainerDied","Data":"8efe50ff2f65655237cd1366a8e44ae9853ecb34e841c999f896cdadf8ea3a48"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.671083 4793 scope.go:117] "RemoveContainer" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.670884 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-znzw5" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.672883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"270527bd-015e-4904-8916-07993e081611","Type":"ContainerStarted","Data":"948b5e724679b27c5ada2e3f8910371798d67929a4b80ce0d2918a8a15b29f5a"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.672906 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"270527bd-015e-4904-8916-07993e081611","Type":"ContainerStarted","Data":"59484b445fb7c7331b9d0dae505879134106f5a9ba82505de133080004eaa949"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.672964 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.678294 4793 generic.go:334] "Generic (PLEG): container finished" podID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerID="d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1" exitCode=0 Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.678370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerDied","Data":"d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.680412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerStarted","Data":"3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262"} Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.680823 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.691075 4793 scope.go:117] "RemoveContainer" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 
14:05:03.715399 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.275388771 podStartE2EDuration="4.715373473s" podCreationTimestamp="2026-01-30 14:04:59 +0000 UTC" firstStartedPulling="2026-01-30 14:05:00.960845304 +0000 UTC m=+1311.662193795" lastFinishedPulling="2026-01-30 14:05:02.400830006 +0000 UTC m=+1313.102178497" observedRunningTime="2026-01-30 14:05:03.701439282 +0000 UTC m=+1314.402787773" watchObservedRunningTime="2026-01-30 14:05:03.715373473 +0000 UTC m=+1314.416721964" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.719835 4793 scope.go:117] "RemoveContainer" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.720477 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a\": container with ID starting with 1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a not found: ID does not exist" containerID="1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.720511 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a"} err="failed to get container status \"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a\": rpc error: code = NotFound desc = could not find container \"1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a\": container with ID starting with 1e1749ec8db92eb064ead368f945fbaad846691aecec93107b1900dbff00828a not found: ID does not exist" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.720536 4793 scope.go:117] "RemoveContainer" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" Jan 30 14:05:03 crc kubenswrapper[4793]: E0130 14:05:03.721009 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c\": container with ID starting with 88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c not found: ID does not exist" containerID="88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.721150 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c"} err="failed to get container status \"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c\": rpc error: code = NotFound desc = could not find container \"88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c\": container with ID starting with 88e6d127a8f61f7881c6b31037df99fb2069b4ee12e85adf4e963f6ecfda2f3c not found: ID does not exist" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.754862 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" podStartSLOduration=4.754846511 podStartE2EDuration="4.754846511s" podCreationTimestamp="2026-01-30 14:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:03.744321383 +0000 UTC m=+1314.445669874" 
watchObservedRunningTime="2026-01-30 14:05:03.754846511 +0000 UTC m=+1314.456195002" Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.769739 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:03 crc kubenswrapper[4793]: I0130 14:05:03.777188 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-znzw5"] Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.407860 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085da052-4aff-4c31-a5ac-398194b443a2" path="/var/lib/kubelet/pods/085da052-4aff-4c31-a5ac-398194b443a2/volumes" Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.472210 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:04 crc kubenswrapper[4793]: E0130 14:05:04.472454 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:04 crc kubenswrapper[4793]: E0130 14:05:04.472493 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:04 crc kubenswrapper[4793]: E0130 14:05:04.472562 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:06.472540556 +0000 UTC m=+1317.173889067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.691779 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerStarted","Data":"610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d"} Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.692891 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:04 crc kubenswrapper[4793]: I0130 14:05:04.714435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podStartSLOduration=3.714416636 podStartE2EDuration="3.714416636s" podCreationTimestamp="2026-01-30 14:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:04.709729701 +0000 UTC m=+1315.411078192" watchObservedRunningTime="2026-01-30 14:05:04.714416636 +0000 UTC m=+1315.415765127" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.474801 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q459t"] Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.475421 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.475436 4793 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.475452 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="init" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.475458 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="init" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.475612 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="085da052-4aff-4c31-a5ac-398194b443a2" containerName="dnsmasq-dns" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.476135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.478274 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.479274 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.479588 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.493409 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q459t"] Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.518937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.519535 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.519557 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:06 crc kubenswrapper[4793]: E0130 14:05:06.519602 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:10.519586921 +0000 UTC m=+1321.220935412 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620615 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620658 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.620958 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.621074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.621179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.729957 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730254 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730363 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730991 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.730881 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.731377 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.731709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.736194 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.739760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.750027 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.756763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"swift-ring-rebalance-q459t\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:06 crc kubenswrapper[4793]: I0130 14:05:06.793847 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.079375 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.080588 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.082693 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.087828 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.240137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.240199 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.254907 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q459t"] Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.341334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.341397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.342211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.364075 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"root-account-create-update-x9wgt\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.403616 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.467521 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.468785 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.563708 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.720256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerStarted","Data":"dfcd68a21a6ccc777d3dfdabb9d0541bc18ef4395d6201dad4b19a23446f3679"} Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.855545 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 14:05:07 crc kubenswrapper[4793]: I0130 14:05:07.892597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:08 crc kubenswrapper[4793]: I0130 14:05:08.734323 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerStarted","Data":"d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382"} Jan 30 14:05:08 crc kubenswrapper[4793]: I0130 14:05:08.734400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerStarted","Data":"ea2f9a9f4498165ce27de35a5cb85dff750b4522c42a1e477432a11404a3b30e"} Jan 30 14:05:08 crc kubenswrapper[4793]: I0130 14:05:08.760348 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-x9wgt" podStartSLOduration=1.760331045 podStartE2EDuration="1.760331045s" podCreationTimestamp="2026-01-30 14:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:08.754984415 +0000 UTC m=+1319.456332946" watchObservedRunningTime="2026-01-30 14:05:08.760331045 +0000 UTC m=+1319.461679536" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.474670 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.476132 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.480845 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.546704 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.547825 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.551013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.559555 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.584370 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.584442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.590221 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.685964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.686020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.686209 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.686297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.688286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.732692 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"glance-db-create-8pwcc\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.752076 4793 generic.go:334] "Generic (PLEG): container finished" podID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerID="d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382" exitCode=0 Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.753291 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerDied","Data":"d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382"} Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.788413 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.788511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.789671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.796381 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.814649 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"glance-ff11-account-create-update-p5nhq\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.848655 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.849750 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.857028 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.866436 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.947374 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.948375 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.952288 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.996632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:09 crc kubenswrapper[4793]: I0130 14:05:09.996711 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.013211 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098661 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098714 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098807 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.098863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.099631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod 
\"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.117263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"keystone-db-create-tq6pw\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.166685 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.200683 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.200833 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.201540 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.217336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"keystone-22a6-account-create-update-59kzd\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.305924 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.307143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.329704 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.333390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.395901 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.397449 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.405866 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.406120 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.410818 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.437866 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508095 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508167 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508196 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508299 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.508707 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.537833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfwd6\" (UniqueName: 
\"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"placement-db-create-gbcdm\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.610468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.610805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.611039 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.611125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: E0130 14:05:10.611234 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:10 crc kubenswrapper[4793]: E0130 14:05:10.611410 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:10 crc kubenswrapper[4793]: E0130 14:05:10.611513 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:18.611500219 +0000 UTC m=+1329.312848700 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.627973 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"placement-3a9f-account-create-update-zkbvj\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.632009 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:10 crc kubenswrapper[4793]: I0130 14:05:10.726315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.388171 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.810433 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.868121 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:11 crc kubenswrapper[4793]: I0130 14:05:11.868405 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" containerID="cri-o://3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262" gracePeriod=10 Jan 30 14:05:12 crc kubenswrapper[4793]: I0130 14:05:12.784636 4793 generic.go:334] "Generic (PLEG): container finished" podID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerID="3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262" exitCode=0 Jan 30 14:05:12 crc kubenswrapper[4793]: I0130 14:05:12.784685 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerDied","Data":"3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262"} Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.437475 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.559766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") pod \"1fd3bf73-817a-402e-866c-8a91e0bc2428\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.559969 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") pod \"1fd3bf73-817a-402e-866c-8a91e0bc2428\" (UID: \"1fd3bf73-817a-402e-866c-8a91e0bc2428\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.561274 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fd3bf73-817a-402e-866c-8a91e0bc2428" (UID: "1fd3bf73-817a-402e-866c-8a91e0bc2428"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.567759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8" (OuterVolumeSpecName: "kube-api-access-sr5j8") pod "1fd3bf73-817a-402e-866c-8a91e0bc2428" (UID: "1fd3bf73-817a-402e-866c-8a91e0bc2428"). InnerVolumeSpecName "kube-api-access-sr5j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.662462 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd3bf73-817a-402e-866c-8a91e0bc2428-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.662492 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr5j8\" (UniqueName: \"kubernetes.io/projected/1fd3bf73-817a-402e-866c-8a91e0bc2428-kube-api-access-sr5j8\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.673472 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765684 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765710 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765728 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.765798 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") pod \"6997fc47-52ce-4421-b8bc-14ad27f1d522\" (UID: \"6997fc47-52ce-4421-b8bc-14ad27f1d522\") " Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.784245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv" (OuterVolumeSpecName: "kube-api-access-vnxfv") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "kube-api-access-vnxfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.812146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" event={"ID":"6997fc47-52ce-4421-b8bc-14ad27f1d522","Type":"ContainerDied","Data":"47391653f861372e1e3bd8173c4ee89c976796812daa5ed1004201d7325a8f2f"} Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.812387 4793 scope.go:117] "RemoveContainer" containerID="3e1ef38e5cfd835a2baa7a28e840d23b75da33fc0616ea9a4ca3947c32a19262" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.812517 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jn5sc" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.818261 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x9wgt" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.817976 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x9wgt" event={"ID":"1fd3bf73-817a-402e-866c-8a91e0bc2428","Type":"ContainerDied","Data":"ea2f9a9f4498165ce27de35a5cb85dff750b4522c42a1e477432a11404a3b30e"} Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.818466 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2f9a9f4498165ce27de35a5cb85dff750b4522c42a1e477432a11404a3b30e" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.835289 4793 scope.go:117] "RemoveContainer" containerID="dc354132d0a6cd02111dfdce273ff0e36cd8eedf4408a97ce6c6cb48e38782b8" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.836441 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.847363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.863495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.863622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config" (OuterVolumeSpecName: "config") pod "6997fc47-52ce-4421-b8bc-14ad27f1d522" (UID: "6997fc47-52ce-4421-b8bc-14ad27f1d522"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.867952 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868101 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868176 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868265 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnxfv\" (UniqueName: \"kubernetes.io/projected/6997fc47-52ce-4421-b8bc-14ad27f1d522-kube-api-access-vnxfv\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.868337 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6997fc47-52ce-4421-b8bc-14ad27f1d522-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:13 crc kubenswrapper[4793]: W0130 14:05:13.914097 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d0f274e_c187_4f1a_aa78_508b1761f9fb.slice/crio-1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8 WatchSource:0}: Error finding container 1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8: Status 404 returned error can't find the container with id 1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8 Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.926917 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:05:13 crc kubenswrapper[4793]: I0130 14:05:13.926964 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.059207 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.081136 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.087498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:05:14 crc kubenswrapper[4793]: W0130 14:05:14.098175 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62fbb159_dc72_4c34_b2b7_5be6be4df981.slice/crio-cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028 WatchSource:0}: Error finding container cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028: Status 404 returned error can't find the container with id cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028 Jan 30 14:05:14 crc kubenswrapper[4793]: W0130 14:05:14.100468 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf81f2e71_1a70_491f_ba0c_ad1a456345c8.slice/crio-1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e WatchSource:0}: Error finding container 1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e: Status 404 returned error can't find the container with id 1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.127655 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.150181 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.155837 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jn5sc"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.211253 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.240501 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.408400 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" path="/var/lib/kubelet/pods/6997fc47-52ce-4421-b8bc-14ad27f1d522/volumes" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.826233 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerStarted","Data":"43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.826558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerStarted","Data":"1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.828954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerStarted","Data":"a1b8fa0ad1007024e2a758d432cfe8f804db4960d86814b080a404a5d1c5e7dd"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.831642 4793 generic.go:334] "Generic (PLEG): container finished" podID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerID="3efaeb1f3745caf5c2ff18e628906fd2ae05a6952ec9376aacd048e2c31a3cdb" exitCode=0 Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.831704 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq6pw" event={"ID":"b3f03641-1e63-4c88-a1f4-f58cf0d81883","Type":"ContainerDied","Data":"3efaeb1f3745caf5c2ff18e628906fd2ae05a6952ec9376aacd048e2c31a3cdb"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.831725 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq6pw" event={"ID":"b3f03641-1e63-4c88-a1f4-f58cf0d81883","Type":"ContainerStarted","Data":"a9e447eeda31cacf6f4b15b396de8b08fe6fa521839c2bcdccd64834364aae1e"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.833757 4793 generic.go:334] "Generic (PLEG): container finished" podID="98986ea8-62f3-4716-9451-0e13567ec2a1" 
containerID="2bc34dab4f37d7b6429a87926db0d3a5178ff268821d2ee975bfe47cb007e77b" exitCode=0 Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.833811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pwcc" event={"ID":"98986ea8-62f3-4716-9451-0e13567ec2a1","Type":"ContainerDied","Data":"2bc34dab4f37d7b6429a87926db0d3a5178ff268821d2ee975bfe47cb007e77b"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.833830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pwcc" event={"ID":"98986ea8-62f3-4716-9451-0e13567ec2a1","Type":"ContainerStarted","Data":"cb09760039a9112dfda2f514c6cc6d916cb55c3c695ec127a1cd6546c15b55a8"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.835479 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerStarted","Data":"792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.835505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerStarted","Data":"cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.838004 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerStarted","Data":"e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.838030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerStarted","Data":"7d69a7884cd7efe94de2ea93b06606bf6e99299116b61e5a4762af1a31d75436"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.842178 4793 generic.go:334] "Generic (PLEG): container finished" podID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerID="e076400efeb8dc1f3b157eb928b1925e404de84a86497e6441e959675b9ddf99" exitCode=0 Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.842267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbcdm" event={"ID":"6d0f274e-c187-4f1a-aa78-508b1761f9fb","Type":"ContainerDied","Data":"e076400efeb8dc1f3b157eb928b1925e404de84a86497e6441e959675b9ddf99"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.842292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbcdm" event={"ID":"6d0f274e-c187-4f1a-aa78-508b1761f9fb","Type":"ContainerStarted","Data":"1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8"} Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.874124 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ff11-account-create-update-p5nhq" podStartSLOduration=5.874111 podStartE2EDuration="5.874111s" podCreationTimestamp="2026-01-30 14:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:14.848440201 +0000 UTC m=+1325.549788702" watchObservedRunningTime="2026-01-30 14:05:14.874111 +0000 UTC m=+1325.575459491" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.892354 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-q459t" podStartSLOduration=2.5925016100000002 podStartE2EDuration="8.892337757s" podCreationTimestamp="2026-01-30 14:05:06 +0000 UTC" firstStartedPulling="2026-01-30 14:05:07.247913327 +0000 UTC m=+1317.949261808" lastFinishedPulling="2026-01-30 14:05:13.547749474 +0000 UTC m=+1324.249097955" observedRunningTime="2026-01-30 14:05:14.871572218 +0000 UTC m=+1325.572920709" watchObservedRunningTime="2026-01-30 14:05:14.892337757 +0000 UTC m=+1325.593686248" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.935063 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3a9f-account-create-update-zkbvj" podStartSLOduration=4.935031743 podStartE2EDuration="4.935031743s" podCreationTimestamp="2026-01-30 14:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:14.89123395 +0000 UTC m=+1325.592582441" watchObservedRunningTime="2026-01-30 14:05:14.935031743 +0000 UTC m=+1325.636380234" Jan 30 14:05:14 crc kubenswrapper[4793]: I0130 14:05:14.968886 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-22a6-account-create-update-59kzd" podStartSLOduration=5.968845433 podStartE2EDuration="5.968845433s" podCreationTimestamp="2026-01-30 14:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:14.964471725 +0000 UTC m=+1325.665820216" watchObservedRunningTime="2026-01-30 14:05:14.968845433 +0000 UTC m=+1325.670193924" Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.853879 4793 generic.go:334] "Generic (PLEG): container finished" podID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerID="792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e" exitCode=0 Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.854360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerDied","Data":"792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e"} Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.856201 4793 generic.go:334] "Generic (PLEG): container finished" podID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerID="e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c" exitCode=0 Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.856360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerDied","Data":"e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c"} Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.863505 4793 generic.go:334] "Generic (PLEG): container finished" podID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerID="43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053" exitCode=0 Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.863716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerDied","Data":"43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053"} Jan 30 14:05:15 crc kubenswrapper[4793]: I0130 14:05:15.989356 4793 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.004436 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-x9wgt"] Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073396 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:16 crc kubenswrapper[4793]: E0130 14:05:16.073758 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerName="mariadb-account-create-update" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073791 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerName="mariadb-account-create-update" Jan 30 14:05:16 crc kubenswrapper[4793]: E0130 14:05:16.073812 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073818 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" Jan 30 14:05:16 crc kubenswrapper[4793]: E0130 14:05:16.073832 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="init" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.073839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="init" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.074020 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6997fc47-52ce-4421-b8bc-14ad27f1d522" containerName="dnsmasq-dns" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.074058 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" containerName="mariadb-account-create-update" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.074575 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.077696 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.083682 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.219769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.223504 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.294038 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.325166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.325257 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.327862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.369459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"root-account-create-update-r6w5v\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.393782 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.415577 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd3bf73-817a-402e-866c-8a91e0bc2428" path="/var/lib/kubelet/pods/1fd3bf73-817a-402e-866c-8a91e0bc2428/volumes" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.426570 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") pod \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.426661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") pod \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\" (UID: \"b3f03641-1e63-4c88-a1f4-f58cf0d81883\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.427242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3f03641-1e63-4c88-a1f4-f58cf0d81883" (UID: "b3f03641-1e63-4c88-a1f4-f58cf0d81883"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.427859 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f03641-1e63-4c88-a1f4-f58cf0d81883-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.430613 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg" (OuterVolumeSpecName: "kube-api-access-pr8tg") pod "b3f03641-1e63-4c88-a1f4-f58cf0d81883" (UID: "b3f03641-1e63-4c88-a1f4-f58cf0d81883"). InnerVolumeSpecName "kube-api-access-pr8tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.463613 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.469498 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.529827 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr8tg\" (UniqueName: \"kubernetes.io/projected/b3f03641-1e63-4c88-a1f4-f58cf0d81883-kube-api-access-pr8tg\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631233 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") pod \"98986ea8-62f3-4716-9451-0e13567ec2a1\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631313 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") pod \"98986ea8-62f3-4716-9451-0e13567ec2a1\" (UID: \"98986ea8-62f3-4716-9451-0e13567ec2a1\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631454 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") pod \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.631527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") pod \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\" (UID: \"6d0f274e-c187-4f1a-aa78-508b1761f9fb\") " Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.632535 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98986ea8-62f3-4716-9451-0e13567ec2a1" (UID: "98986ea8-62f3-4716-9451-0e13567ec2a1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.632680 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d0f274e-c187-4f1a-aa78-508b1761f9fb" (UID: "6d0f274e-c187-4f1a-aa78-508b1761f9fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.635305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6" (OuterVolumeSpecName: "kube-api-access-tfwd6") pod "6d0f274e-c187-4f1a-aa78-508b1761f9fb" (UID: "6d0f274e-c187-4f1a-aa78-508b1761f9fb"). InnerVolumeSpecName "kube-api-access-tfwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.638903 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx" (OuterVolumeSpecName: "kube-api-access-bv8gx") pod "98986ea8-62f3-4716-9451-0e13567ec2a1" (UID: "98986ea8-62f3-4716-9451-0e13567ec2a1"). InnerVolumeSpecName "kube-api-access-bv8gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734025 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d0f274e-c187-4f1a-aa78-508b1761f9fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734227 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfwd6\" (UniqueName: \"kubernetes.io/projected/6d0f274e-c187-4f1a-aa78-508b1761f9fb-kube-api-access-tfwd6\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734237 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98986ea8-62f3-4716-9451-0e13567ec2a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:16 crc kubenswrapper[4793]: I0130 14:05:16.734248 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv8gx\" (UniqueName: \"kubernetes.io/projected/98986ea8-62f3-4716-9451-0e13567ec2a1-kube-api-access-bv8gx\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.161308 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pwcc" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.165150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pwcc" event={"ID":"98986ea8-62f3-4716-9451-0e13567ec2a1","Type":"ContainerDied","Data":"cb09760039a9112dfda2f514c6cc6d916cb55c3c695ec127a1cd6546c15b55a8"} Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.165255 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb09760039a9112dfda2f514c6cc6d916cb55c3c695ec127a1cd6546c15b55a8" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.167545 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tq6pw" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.167556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq6pw" event={"ID":"b3f03641-1e63-4c88-a1f4-f58cf0d81883","Type":"ContainerDied","Data":"a9e447eeda31cacf6f4b15b396de8b08fe6fa521839c2bcdccd64834364aae1e"} Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.167880 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e447eeda31cacf6f4b15b396de8b08fe6fa521839c2bcdccd64834364aae1e" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.169568 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gbcdm" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.172074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gbcdm" event={"ID":"6d0f274e-c187-4f1a-aa78-508b1761f9fb","Type":"ContainerDied","Data":"1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8"} Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.172112 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1039ce097a065ceb7f6cbd6b3b5d6e73401a103ef33341c42a54ecdb3c2e9be8" Jan 30 14:05:17 crc kubenswrapper[4793]: I0130 14:05:17.205490 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.659006 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.673642 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.673724 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:17.799824 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c5fc335_85d3_41d9_af0a_d0e3aede352b.slice/crio-conmon-0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c5fc335_85d3_41d9_af0a_d0e3aede352b.slice/crio-0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850662 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") pod \"62fbb159-dc72-4c34-b2b7-5be6be4df981\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850746 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") pod \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") pod \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\" (UID: \"f81f2e71-1a70-491f-ba0c-ad1a456345c8\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") pod \"563516b7-0256-4c05-b1d1-3aa03d692afb\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") pod \"62fbb159-dc72-4c34-b2b7-5be6be4df981\" (UID: \"62fbb159-dc72-4c34-b2b7-5be6be4df981\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.850975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") pod \"563516b7-0256-4c05-b1d1-3aa03d692afb\" (UID: \"563516b7-0256-4c05-b1d1-3aa03d692afb\") " Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.851934 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62fbb159-dc72-4c34-b2b7-5be6be4df981" (UID: "62fbb159-dc72-4c34-b2b7-5be6be4df981"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.852122 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62fbb159-dc72-4c34-b2b7-5be6be4df981-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.852114 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f81f2e71-1a70-491f-ba0c-ad1a456345c8" (UID: "f81f2e71-1a70-491f-ba0c-ad1a456345c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.852409 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "563516b7-0256-4c05-b1d1-3aa03d692afb" (UID: "563516b7-0256-4c05-b1d1-3aa03d692afb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.856758 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6" (OuterVolumeSpecName: "kube-api-access-t5gw6") pod "563516b7-0256-4c05-b1d1-3aa03d692afb" (UID: "563516b7-0256-4c05-b1d1-3aa03d692afb"). InnerVolumeSpecName "kube-api-access-t5gw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.856859 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn" (OuterVolumeSpecName: "kube-api-access-97zzn") pod "62fbb159-dc72-4c34-b2b7-5be6be4df981" (UID: "62fbb159-dc72-4c34-b2b7-5be6be4df981"). InnerVolumeSpecName "kube-api-access-97zzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.857448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626" (OuterVolumeSpecName: "kube-api-access-vm626") pod "f81f2e71-1a70-491f-ba0c-ad1a456345c8" (UID: "f81f2e71-1a70-491f-ba0c-ad1a456345c8"). InnerVolumeSpecName "kube-api-access-vm626". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953155 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81f2e71-1a70-491f-ba0c-ad1a456345c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953186 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm626\" (UniqueName: \"kubernetes.io/projected/f81f2e71-1a70-491f-ba0c-ad1a456345c8-kube-api-access-vm626\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953201 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gw6\" (UniqueName: \"kubernetes.io/projected/563516b7-0256-4c05-b1d1-3aa03d692afb-kube-api-access-t5gw6\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953210 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/563516b7-0256-4c05-b1d1-3aa03d692afb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:17.953220 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/62fbb159-dc72-4c34-b2b7-5be6be4df981-kube-api-access-97zzn\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.180686 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-22a6-account-create-update-59kzd" event={"ID":"563516b7-0256-4c05-b1d1-3aa03d692afb","Type":"ContainerDied","Data":"7d69a7884cd7efe94de2ea93b06606bf6e99299116b61e5a4762af1a31d75436"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.180717 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d69a7884cd7efe94de2ea93b06606bf6e99299116b61e5a4762af1a31d75436" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.180790 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-22a6-account-create-update-59kzd" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.186192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ff11-account-create-update-p5nhq" event={"ID":"f81f2e71-1a70-491f-ba0c-ad1a456345c8","Type":"ContainerDied","Data":"1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.186222 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1635e22d747e1e9ecdb13fd83e4f66247ad344b78ffe852aa12ec1f91c0d069e" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.186297 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ff11-account-create-update-p5nhq" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.191802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a9f-account-create-update-zkbvj" event={"ID":"62fbb159-dc72-4c34-b2b7-5be6be4df981","Type":"ContainerDied","Data":"cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.191846 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdab6e776d028e9251c9333022bcb3bff90331c0dec32cedbd959678ebc24028" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.191962 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a9f-account-create-update-zkbvj" Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.203019 4793 generic.go:334] "Generic (PLEG): container finished" podID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" containerID="0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8" exitCode=0 Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.203062 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r6w5v" event={"ID":"8c5fc335-85d3-41d9-af0a-d0e3aede352b","Type":"ContainerDied","Data":"0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.203159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r6w5v" event={"ID":"8c5fc335-85d3-41d9-af0a-d0e3aede352b","Type":"ContainerStarted","Data":"ef7e3d86992b0608a1f5c882b1bed3724444b7f930e935580cc522ebda3d7a72"} Jan 30 14:05:18 crc kubenswrapper[4793]: I0130 14:05:18.674975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:18.675200 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:18.675396 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 14:05:18 crc kubenswrapper[4793]: E0130 14:05:18.675447 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift podName:76182868-5b55-403e-a2be-0c6879e9a2e0 nodeName:}" failed. No retries permitted until 2026-01-30 14:05:34.675430503 +0000 UTC m=+1345.376778994 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift") pod "swift-storage-0" (UID: "76182868-5b55-403e-a2be-0c6879e9a2e0") : configmap "swift-ring-files" not found Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.214015 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerID="d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991" exitCode=0 Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.214079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerDied","Data":"d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991"} Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.216679 4793 generic.go:334] "Generic (PLEG): container finished" podID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" exitCode=0 Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.216792 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerDied","Data":"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48"} Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.595540 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.689960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") pod \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.690114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") pod \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\" (UID: \"8c5fc335-85d3-41d9-af0a-d0e3aede352b\") " Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.690483 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c5fc335-85d3-41d9-af0a-d0e3aede352b" (UID: "8c5fc335-85d3-41d9-af0a-d0e3aede352b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.690632 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c5fc335-85d3-41d9-af0a-d0e3aede352b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.695549 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt" (OuterVolumeSpecName: "kube-api-access-bmmjt") pod "8c5fc335-85d3-41d9-af0a-d0e3aede352b" (UID: "8c5fc335-85d3-41d9-af0a-d0e3aede352b"). InnerVolumeSpecName "kube-api-access-bmmjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.744714 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745007 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745025 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745033 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745040 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745068 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745075 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745087 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745094 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745116 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745127 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745133 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: E0130 14:05:19.745150 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745156 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745310 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745324 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" 
containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745333 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745341 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745351 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745360 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" containerName="mariadb-account-create-update" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745368 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" containerName="mariadb-database-create" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.745815 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.754631 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.758844 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jb79g" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.761421 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.792370 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmmjt\" (UniqueName: \"kubernetes.io/projected/8c5fc335-85d3-41d9-af0a-d0e3aede352b-kube-api-access-bmmjt\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893463 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.893542 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod 
\"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.994774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.995030 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.995132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.995187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.998566 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.999322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:19 crc kubenswrapper[4793]: I0130 14:05:19.999463 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.016637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"glance-db-sync-btxs9\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.059309 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.280567 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerStarted","Data":"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa"} Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.281300 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.285384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r6w5v" event={"ID":"8c5fc335-85d3-41d9-af0a-d0e3aede352b","Type":"ContainerDied","Data":"ef7e3d86992b0608a1f5c882b1bed3724444b7f930e935580cc522ebda3d7a72"} Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.285423 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7e3d86992b0608a1f5c882b1bed3724444b7f930e935580cc522ebda3d7a72" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.285454 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r6w5v" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.287674 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerStarted","Data":"b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265"} Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.287982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.327071 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.641938794 podStartE2EDuration="1m16.327031613s" podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:06.939217602 +0000 UTC m=+1257.640566093" lastFinishedPulling="2026-01-30 14:04:44.624310421 +0000 UTC m=+1295.325658912" observedRunningTime="2026-01-30 14:05:20.314065526 +0000 UTC m=+1331.015414027" watchObservedRunningTime="2026-01-30 14:05:20.327031613 +0000 UTC m=+1331.028380104" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.387094 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.744207081 podStartE2EDuration="1m16.387071735s" podCreationTimestamp="2026-01-30 14:04:04 +0000 UTC" firstStartedPulling="2026-01-30 14:04:07.063342615 +0000 UTC m=+1257.764691106" lastFinishedPulling="2026-01-30 14:04:44.706207269 +0000 UTC m=+1295.407555760" observedRunningTime="2026-01-30 14:05:20.352824146 +0000 UTC m=+1331.054172647" watchObservedRunningTime="2026-01-30 14:05:20.387071735 +0000 UTC m=+1331.088420226" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.431445 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 14:05:20 crc kubenswrapper[4793]: I0130 14:05:20.578435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:05:20 crc kubenswrapper[4793]: W0130 14:05:20.589325 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b977757_3d3e_48e5_a1e2_d31ebeda138e.slice/crio-7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b WatchSource:0}: Error finding container 7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b: Status 404 returned error can't find the container with id 7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b Jan 30 14:05:21 crc kubenswrapper[4793]: I0130 14:05:21.296322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerStarted","Data":"7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b"} Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.198618 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.207264 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r6w5v"] Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.307452 4793 generic.go:334] "Generic (PLEG): container finished" podID="50011731-846f-4e86-8664-f9c797dc64ed" containerID="a1b8fa0ad1007024e2a758d432cfe8f804db4960d86814b080a404a5d1c5e7dd" exitCode=0 Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.307499 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerDied","Data":"a1b8fa0ad1007024e2a758d432cfe8f804db4960d86814b080a404a5d1c5e7dd"} Jan 30 14:05:22 crc kubenswrapper[4793]: I0130 14:05:22.411461 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5fc335-85d3-41d9-af0a-d0e3aede352b" path="/var/lib/kubelet/pods/8c5fc335-85d3-41d9-af0a-d0e3aede352b/volumes" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.738603 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854279 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854393 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.854450 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855237 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") pod \"50011731-846f-4e86-8664-f9c797dc64ed\" (UID: \"50011731-846f-4e86-8664-f9c797dc64ed\") " Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.855796 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.856059 4793 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.856085 4793 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/50011731-846f-4e86-8664-f9c797dc64ed-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.868259 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.877841 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts" (OuterVolumeSpecName: "scripts") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.881294 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46" (OuterVolumeSpecName: "kube-api-access-h4s46") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "kube-api-access-h4s46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.886951 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.909845 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "50011731-846f-4e86-8664-f9c797dc64ed" (UID: "50011731-846f-4e86-8664-f9c797dc64ed"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957513 4793 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957541 4793 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957550 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50011731-846f-4e86-8664-f9c797dc64ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957559 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50011731-846f-4e86-8664-f9c797dc64ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:23 crc kubenswrapper[4793]: I0130 14:05:23.957573 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4s46\" (UniqueName: \"kubernetes.io/projected/50011731-846f-4e86-8664-f9c797dc64ed-kube-api-access-h4s46\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.332460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q459t" event={"ID":"50011731-846f-4e86-8664-f9c797dc64ed","Type":"ContainerDied","Data":"dfcd68a21a6ccc777d3dfdabb9d0541bc18ef4395d6201dad4b19a23446f3679"} Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.332507 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcd68a21a6ccc777d3dfdabb9d0541bc18ef4395d6201dad4b19a23446f3679" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.332588 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q459t" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.638843 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:05:24 crc kubenswrapper[4793]: I0130 14:05:24.696750 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-45fd5" podUID="230700ff-5087-4d0d-9d93-90b597d2ef72" containerName="ovn-controller" probeResult="failure" output=< Jan 30 14:05:24 crc kubenswrapper[4793]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 14:05:24 crc kubenswrapper[4793]: > Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.218257 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:05:27 crc kubenswrapper[4793]: E0130 14:05:27.218534 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50011731-846f-4e86-8664-f9c797dc64ed" containerName="swift-ring-rebalance" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.218545 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="50011731-846f-4e86-8664-f9c797dc64ed" containerName="swift-ring-rebalance" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.218698 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="50011731-846f-4e86-8664-f9c797dc64ed" containerName="swift-ring-rebalance" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.219164 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.222848 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.241838 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.412072 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.412225 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.514562 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.515004 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod 
\"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.515406 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.536605 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod \"root-account-create-update-ktlrj\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:27 crc kubenswrapper[4793]: I0130 14:05:27.602144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.606402 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-56x4d" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.707596 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-45fd5" podUID="230700ff-5087-4d0d-9d93-90b597d2ef72" containerName="ovn-controller" probeResult="failure" output=< Jan 30 14:05:29 crc kubenswrapper[4793]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 14:05:29 crc kubenswrapper[4793]: > Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.847778 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.848785 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.851104 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.852958 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853105 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853128 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.853157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.876850 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.954995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") 
pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955154 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.955968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.956552 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.960196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"ovn-controller-45fd5-config-7cmw2\" 
(UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:29 crc kubenswrapper[4793]: I0130 14:05:29.991348 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"ovn-controller-45fd5-config-7cmw2\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:30 crc kubenswrapper[4793]: I0130 14:05:30.178896 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:33 crc kubenswrapper[4793]: I0130 14:05:33.813333 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:33 crc kubenswrapper[4793]: I0130 14:05:33.929244 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:05:33 crc kubenswrapper[4793]: W0130 14:05:33.929667 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec365c0b_f8d9_4b59_bb89_a583d1eb7257.slice/crio-6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43 WatchSource:0}: Error finding container 6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43: Status 404 returned error can't find the container with id 6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43 Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.420809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerStarted","Data":"aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.422677 4793 generic.go:334] "Generic (PLEG): container finished" podID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerID="915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065" exitCode=0 Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.422763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5-config-7cmw2" event={"ID":"afab5fb9-07ec-48e9-b50b-28e47d11942b","Type":"ContainerDied","Data":"915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.422795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5-config-7cmw2" event={"ID":"afab5fb9-07ec-48e9-b50b-28e47d11942b","Type":"ContainerStarted","Data":"3183ceacb40c43d1a8e662c19d9461e4ddb8e55c500e70d8862604cd360f4f8b"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.425028 4793 generic.go:334] "Generic (PLEG): container finished" podID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerID="49617378d146339946d69a33ebd155e69d9eb4e257e62cbaa6d931330bc913ba" exitCode=0 Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.425124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ktlrj" event={"ID":"ec365c0b-f8d9-4b59-bb89-a583d1eb7257","Type":"ContainerDied","Data":"49617378d146339946d69a33ebd155e69d9eb4e257e62cbaa6d931330bc913ba"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.425150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ktlrj" 
event={"ID":"ec365c0b-f8d9-4b59-bb89-a583d1eb7257","Type":"ContainerStarted","Data":"6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43"} Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.449907 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-btxs9" podStartSLOduration=2.6111264050000003 podStartE2EDuration="15.449886529s" podCreationTimestamp="2026-01-30 14:05:19 +0000 UTC" firstStartedPulling="2026-01-30 14:05:20.591851456 +0000 UTC m=+1331.293199947" lastFinishedPulling="2026-01-30 14:05:33.43061156 +0000 UTC m=+1344.131960071" observedRunningTime="2026-01-30 14:05:34.440629902 +0000 UTC m=+1345.141978403" watchObservedRunningTime="2026-01-30 14:05:34.449886529 +0000 UTC m=+1345.151235020" Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.689255 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-45fd5" Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.751400 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:34 crc kubenswrapper[4793]: I0130 14:05:34.759480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76182868-5b55-403e-a2be-0c6879e9a2e0-etc-swift\") pod \"swift-storage-0\" (UID: \"76182868-5b55-403e-a2be-0c6879e9a2e0\") " pod="openstack/swift-storage-0" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.057954 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.443465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.833126 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.840439 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971223 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971252 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971452 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") pod \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972202 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") pod \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\" (UID: \"ec365c0b-f8d9-4b59-bb89-a583d1eb7257\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.971997 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972251 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts" (OuterVolumeSpecName: "scripts") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972687 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec365c0b-f8d9-4b59-bb89-a583d1eb7257" (UID: "ec365c0b-f8d9-4b59-bb89-a583d1eb7257"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") pod \"afab5fb9-07ec-48e9-b50b-28e47d11942b\" (UID: \"afab5fb9-07ec-48e9-b50b-28e47d11942b\") " Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972784 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run" (OuterVolumeSpecName: "var-run") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972968 4793 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972984 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.972995 4793 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.973004 4793 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.973012 4793 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/afab5fb9-07ec-48e9-b50b-28e47d11942b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.973020 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afab5fb9-07ec-48e9-b50b-28e47d11942b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.976520 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99" (OuterVolumeSpecName: "kube-api-access-rtd99") pod "afab5fb9-07ec-48e9-b50b-28e47d11942b" (UID: "afab5fb9-07ec-48e9-b50b-28e47d11942b"). InnerVolumeSpecName "kube-api-access-rtd99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:35 crc kubenswrapper[4793]: I0130 14:05:35.976614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr" (OuterVolumeSpecName: "kube-api-access-kvdfr") pod "ec365c0b-f8d9-4b59-bb89-a583d1eb7257" (UID: "ec365c0b-f8d9-4b59-bb89-a583d1eb7257"). InnerVolumeSpecName "kube-api-access-kvdfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.074430 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvdfr\" (UniqueName: \"kubernetes.io/projected/ec365c0b-f8d9-4b59-bb89-a583d1eb7257-kube-api-access-kvdfr\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.074462 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtd99\" (UniqueName: \"kubernetes.io/projected/afab5fb9-07ec-48e9-b50b-28e47d11942b-kube-api-access-rtd99\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.081262 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.238007 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.455726 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:05:36 crc kubenswrapper[4793]: E0130 14:05:36.456042 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerName="mariadb-account-create-update" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456073 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerName="mariadb-account-create-update" Jan 30 14:05:36 crc kubenswrapper[4793]: E0130 14:05:36.456085 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerName="ovn-config" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456093 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerName="ovn-config" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456258 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" containerName="mariadb-account-create-update" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456279 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" containerName="ovn-config" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.456752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.464980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"c2c4fd28411dc4300f936e163f6ecb733dff5d088151b768ba5cc48730783c5f"} Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.475322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-45fd5-config-7cmw2" event={"ID":"afab5fb9-07ec-48e9-b50b-28e47d11942b","Type":"ContainerDied","Data":"3183ceacb40c43d1a8e662c19d9461e4ddb8e55c500e70d8862604cd360f4f8b"} Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.475358 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3183ceacb40c43d1a8e662c19d9461e4ddb8e55c500e70d8862604cd360f4f8b" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.475416 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-45fd5-config-7cmw2" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.486876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ktlrj" event={"ID":"ec365c0b-f8d9-4b59-bb89-a583d1eb7257","Type":"ContainerDied","Data":"6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43"} Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.487154 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6923621ecca2ecd3d9e485cf5299f11163ef081541fe789ba548d0113b594a43" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.487011 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ktlrj" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.498889 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.583960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.584025 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.641933 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.643113 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.645371 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.672608 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.685298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.685343 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.686295 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.729794 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"cinder-db-create-gvh75\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.758620 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.759566 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.787603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.787708 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.789222 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.800760 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.801826 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.806254 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.841269 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.881435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889552 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod 
\"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889704 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.889722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.890676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.929991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"barbican-29ee-account-create-update-56zfp\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.959003 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.974122 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.975213 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993441 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.993550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.994329 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld" Jan 30 14:05:36 crc kubenswrapper[4793]: I0130 14:05:36.995016 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.027819 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.049176 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"cinder-3f03-account-create-update-s5gbm\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.053896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"barbican-db-create-89mld\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " pod="openstack/barbican-db-create-89mld" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.077159 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.078215 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.080200 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.082945 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.086816 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.095656 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.095770 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.123613 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.127579 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.132808 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-45fd5-config-7cmw2"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.148891 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.150241 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.154556 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.154772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.156516 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.156669 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.166985 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.196823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.196895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.196980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.197003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.197697 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.214695 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"neutron-db-create-t2ntm\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298826 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298902 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.298923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.299758 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.314130 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.317657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"neutron-ac9c-account-create-update-6cnjz\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.400266 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.400340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.400402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.401748 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.404813 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.406709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.419184 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"keystone-db-sync-zbw76\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:37 crc kubenswrapper[4793]: I0130 14:05:37.463409 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.192373 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.263171 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.289557 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.303446 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:05:38 crc kubenswrapper[4793]: W0130 14:05:38.451132 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaec468e_bf72_4c93_8b47_6aac4c7a0b3d.slice/crio-73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c WatchSource:0}: Error finding container 73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c: Status 404 returned error can't find the container with id 73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.453736 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afab5fb9-07ec-48e9-b50b-28e47d11942b" path="/var/lib/kubelet/pods/afab5fb9-07ec-48e9-b50b-28e47d11942b/volumes" Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.471928 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.474262 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.522475 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.544201 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ee-account-create-update-56zfp" event={"ID":"2392ab6f-ca9b-4211-bd23-a243ce0ee554","Type":"ContainerStarted","Data":"d4cf9631195a64608c3f002c83e4f091ee13070d383c3da9feede1c63959b9ad"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.562033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerStarted","Data":"73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.580095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"c51b1feaec54051ed2fbb26721cebf026aa34164ecab75afe8fb181253d7cf07"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.580146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"1d8b5c5b0c9368bfd86c628db2535079b0cc886d06e9ceb9edd83c4cc416215b"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.593857 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2ntm" 
event={"ID":"e00abb05-5932-47c8-9bd4-34014f966013","Type":"ContainerStarted","Data":"1021ce56a65f1678d6067bce77001cc3379da23303902ddfacdf17e2cf71d0d6"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.605320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f03-account-create-update-s5gbm" event={"ID":"6c07a623-53fe-44a2-9810-5d1137c659c3","Type":"ContainerStarted","Data":"ee48e1466c00be71a5cc4e94080113b3179b45afeb01e2591c730c312c7e1330"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.624693 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerStarted","Data":"a98469b953fdea84db2353b46820e7ccea308550c6d0675a79c61f90585562e6"} Jan 30 14:05:38 crc kubenswrapper[4793]: I0130 14:05:38.634003 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-89mld" event={"ID":"13613099-2932-4476-8032-82095348fb10","Type":"ContainerStarted","Data":"0d6cb9581f933e041346e0d413379c356e5ec4a01767e314546263b6c74898b2"} Jan 30 14:05:39 crc kubenswrapper[4793]: I0130 14:05:39.642006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac9c-account-create-update-6cnjz" event={"ID":"1f786311-b5ef-427f-b167-c49267de28c6","Type":"ContainerStarted","Data":"2deeaef8b972645a1d4c815ad2b00a78dfaff0b6cd39c4e7e87229596ae5df93"} Jan 30 14:05:41 crc kubenswrapper[4793]: I0130 14:05:41.657069 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerStarted","Data":"73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7"} Jan 30 14:05:41 crc kubenswrapper[4793]: I0130 14:05:41.658725 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"7a5ace428948da31f74e2caec8ff49c143ac2f3ff7117ecf46cd32e1d24edde9"} Jan 30 14:05:41 crc kubenswrapper[4793]: I0130 14:05:41.682873 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-gvh75" podStartSLOduration=5.6828521720000005 podStartE2EDuration="5.682852172s" podCreationTimestamp="2026-01-30 14:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:41.671932004 +0000 UTC m=+1352.373280505" watchObservedRunningTime="2026-01-30 14:05:41.682852172 +0000 UTC m=+1352.384200733" Jan 30 14:05:42 crc kubenswrapper[4793]: I0130 14:05:42.413893 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:05:42 crc kubenswrapper[4793]: I0130 14:05:42.413972 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:05:42 crc kubenswrapper[4793]: I0130 14:05:42.668734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"39b8bf95080274fbc27d3409af96f8cd4dee705879ecae4910ae82cb5c5960e8"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.676423 4793 generic.go:334] "Generic (PLEG): container finished" podID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerID="b3caaa69aab524adb26fd9c4ff43996ac15d6994d1472ccaa076a079e9b6dba0" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.676536 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f03-account-create-update-s5gbm" event={"ID":"6c07a623-53fe-44a2-9810-5d1137c659c3","Type":"ContainerDied","Data":"b3caaa69aab524adb26fd9c4ff43996ac15d6994d1472ccaa076a079e9b6dba0"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.678022 4793 generic.go:334] "Generic (PLEG): container finished" podID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerID="73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.678105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerDied","Data":"73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.679872 4793 generic.go:334] "Generic (PLEG): container finished" podID="13613099-2932-4476-8032-82095348fb10" containerID="75d0a8131037e3e42e5261a0799894acdf4d57f9756c3dd89c681177ee69f801" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.679938 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-89mld" event={"ID":"13613099-2932-4476-8032-82095348fb10","Type":"ContainerDied","Data":"75d0a8131037e3e42e5261a0799894acdf4d57f9756c3dd89c681177ee69f801"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.681746 4793 generic.go:334] "Generic (PLEG): container finished" podID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerID="88e81edcf2367a38a7b0e1df9af6001a75b1047fd8c5d669cd70d0dad383c305" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.681786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ee-account-create-update-56zfp" event={"ID":"2392ab6f-ca9b-4211-bd23-a243ce0ee554","Type":"ContainerDied","Data":"88e81edcf2367a38a7b0e1df9af6001a75b1047fd8c5d669cd70d0dad383c305"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.683528 4793 generic.go:334] "Generic (PLEG): container finished" podID="e00abb05-5932-47c8-9bd4-34014f966013" containerID="4a2aafe80408cac269537f00f3232599775bbba2b58f84e2c22d7bc9ff168a56" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.683566 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2ntm" event={"ID":"e00abb05-5932-47c8-9bd4-34014f966013","Type":"ContainerDied","Data":"4a2aafe80408cac269537f00f3232599775bbba2b58f84e2c22d7bc9ff168a56"} Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.684952 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f786311-b5ef-427f-b167-c49267de28c6" containerID="be7f675ca5c9219f83817d0e2dc9af6d1edad5191618166a3b580984eb47dd17" exitCode=0 Jan 30 14:05:43 crc kubenswrapper[4793]: I0130 14:05:43.684979 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac9c-account-create-update-6cnjz" 
event={"ID":"1f786311-b5ef-427f-b167-c49267de28c6","Type":"ContainerDied","Data":"be7f675ca5c9219f83817d0e2dc9af6d1edad5191618166a3b580984eb47dd17"} Jan 30 14:05:44 crc kubenswrapper[4793]: I0130 14:05:44.698398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"dd13069ceb47825909b33bb601082e34ff4af97379264c16584ddabfa433c75f"} Jan 30 14:05:44 crc kubenswrapper[4793]: I0130 14:05:44.698999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"05599580ba24b8de745bbb2423d18a9f5f1082fb5f2e3834df84741cbe48e2a8"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.520905 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.539644 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.547958 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.553194 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.559949 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.584086 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.644769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjl27\" (UniqueName: \"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") pod \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.644905 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") pod \"1f786311-b5ef-427f-b167-c49267de28c6\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.644945 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") pod \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645003 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") pod \"13613099-2932-4476-8032-82095348fb10\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645039 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") pod \"13613099-2932-4476-8032-82095348fb10\" (UID: \"13613099-2932-4476-8032-82095348fb10\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645084 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") pod \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\" (UID: \"2392ab6f-ca9b-4211-bd23-a243ce0ee554\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645110 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") pod \"1f786311-b5ef-427f-b167-c49267de28c6\" (UID: \"1f786311-b5ef-427f-b167-c49267de28c6\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645142 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") pod \"e00abb05-5932-47c8-9bd4-34014f966013\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645228 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") pod \"e00abb05-5932-47c8-9bd4-34014f966013\" (UID: \"e00abb05-5932-47c8-9bd4-34014f966013\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.645282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") pod \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\" (UID: \"bfa3c464-d85c-4ea1-816e-7dda86dbb9de\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.647334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2392ab6f-ca9b-4211-bd23-a243ce0ee554" (UID: "2392ab6f-ca9b-4211-bd23-a243ce0ee554"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.647403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f786311-b5ef-427f-b167-c49267de28c6" (UID: "1f786311-b5ef-427f-b167-c49267de28c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.647403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e00abb05-5932-47c8-9bd4-34014f966013" (UID: "e00abb05-5932-47c8-9bd4-34014f966013"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.648508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13613099-2932-4476-8032-82095348fb10" (UID: "13613099-2932-4476-8032-82095348fb10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.648771 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfa3c464-d85c-4ea1-816e-7dda86dbb9de" (UID: "bfa3c464-d85c-4ea1-816e-7dda86dbb9de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.651972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7" (OuterVolumeSpecName: "kube-api-access-cv6g7") pod "1f786311-b5ef-427f-b167-c49267de28c6" (UID: "1f786311-b5ef-427f-b167-c49267de28c6"). InnerVolumeSpecName "kube-api-access-cv6g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.652754 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27" (OuterVolumeSpecName: "kube-api-access-gjl27") pod "bfa3c464-d85c-4ea1-816e-7dda86dbb9de" (UID: "bfa3c464-d85c-4ea1-816e-7dda86dbb9de"). InnerVolumeSpecName "kube-api-access-gjl27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.652902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt" (OuterVolumeSpecName: "kube-api-access-tw8kt") pod "2392ab6f-ca9b-4211-bd23-a243ce0ee554" (UID: "2392ab6f-ca9b-4211-bd23-a243ce0ee554"). InnerVolumeSpecName "kube-api-access-tw8kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.653851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77" (OuterVolumeSpecName: "kube-api-access-7rq77") pod "e00abb05-5932-47c8-9bd4-34014f966013" (UID: "e00abb05-5932-47c8-9bd4-34014f966013"). InnerVolumeSpecName "kube-api-access-7rq77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.661736 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn" (OuterVolumeSpecName: "kube-api-access-t5mhn") pod "13613099-2932-4476-8032-82095348fb10" (UID: "13613099-2932-4476-8032-82095348fb10"). InnerVolumeSpecName "kube-api-access-t5mhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.723124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gvh75" event={"ID":"bfa3c464-d85c-4ea1-816e-7dda86dbb9de","Type":"ContainerDied","Data":"a98469b953fdea84db2353b46820e7ccea308550c6d0675a79c61f90585562e6"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.723178 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a98469b953fdea84db2353b46820e7ccea308550c6d0675a79c61f90585562e6" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.723149 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gvh75" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.726783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-89mld" event={"ID":"13613099-2932-4476-8032-82095348fb10","Type":"ContainerDied","Data":"0d6cb9581f933e041346e0d413379c356e5ec4a01767e314546263b6c74898b2"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.726865 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6cb9581f933e041346e0d413379c356e5ec4a01767e314546263b6c74898b2" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.726937 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-89mld" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.730210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ee-account-create-update-56zfp" event={"ID":"2392ab6f-ca9b-4211-bd23-a243ce0ee554","Type":"ContainerDied","Data":"d4cf9631195a64608c3f002c83e4f091ee13070d383c3da9feede1c63959b9ad"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.730241 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cf9631195a64608c3f002c83e4f091ee13070d383c3da9feede1c63959b9ad" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.730285 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-29ee-account-create-update-56zfp" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.737344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerStarted","Data":"2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.743409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"11476fa4e67a7467736dc9b47cc14a6a3b2a8960fb2f1a07b6d06a7794a1b35e"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746453 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") pod \"6c07a623-53fe-44a2-9810-5d1137c659c3\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746526 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") pod \"6c07a623-53fe-44a2-9810-5d1137c659c3\" (UID: \"6c07a623-53fe-44a2-9810-5d1137c659c3\") " Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mhn\" (UniqueName: \"kubernetes.io/projected/13613099-2932-4476-8032-82095348fb10-kube-api-access-t5mhn\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746897 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13613099-2932-4476-8032-82095348fb10-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746907 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2392ab6f-ca9b-4211-bd23-a243ce0ee554-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746915 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f786311-b5ef-427f-b167-c49267de28c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746922 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e00abb05-5932-47c8-9bd4-34014f966013-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746932 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq77\" (UniqueName: \"kubernetes.io/projected/e00abb05-5932-47c8-9bd4-34014f966013-kube-api-access-7rq77\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746941 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746950 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjl27\" (UniqueName: 
\"kubernetes.io/projected/bfa3c464-d85c-4ea1-816e-7dda86dbb9de-kube-api-access-gjl27\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746961 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6g7\" (UniqueName: \"kubernetes.io/projected/1f786311-b5ef-427f-b167-c49267de28c6-kube-api-access-cv6g7\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.746969 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw8kt\" (UniqueName: \"kubernetes.io/projected/2392ab6f-ca9b-4211-bd23-a243ce0ee554-kube-api-access-tw8kt\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.747749 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c07a623-53fe-44a2-9810-5d1137c659c3" (UID: "6c07a623-53fe-44a2-9810-5d1137c659c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.748077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2ntm" event={"ID":"e00abb05-5932-47c8-9bd4-34014f966013","Type":"ContainerDied","Data":"1021ce56a65f1678d6067bce77001cc3379da23303902ddfacdf17e2cf71d0d6"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.748193 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1021ce56a65f1678d6067bce77001cc3379da23303902ddfacdf17e2cf71d0d6" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.748333 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2ntm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.751478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac9c-account-create-update-6cnjz" event={"ID":"1f786311-b5ef-427f-b167-c49267de28c6","Type":"ContainerDied","Data":"2deeaef8b972645a1d4c815ad2b00a78dfaff0b6cd39c4e7e87229596ae5df93"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.751520 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2deeaef8b972645a1d4c815ad2b00a78dfaff0b6cd39c4e7e87229596ae5df93" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.751592 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ac9c-account-create-update-6cnjz" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.758217 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zbw76" podStartSLOduration=1.8368464100000002 podStartE2EDuration="10.758198982s" podCreationTimestamp="2026-01-30 14:05:37 +0000 UTC" firstStartedPulling="2026-01-30 14:05:38.494018105 +0000 UTC m=+1349.195366596" lastFinishedPulling="2026-01-30 14:05:47.415370677 +0000 UTC m=+1358.116719168" observedRunningTime="2026-01-30 14:05:47.756602623 +0000 UTC m=+1358.457951124" watchObservedRunningTime="2026-01-30 14:05:47.758198982 +0000 UTC m=+1358.459547473" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.763851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f03-account-create-update-s5gbm" event={"ID":"6c07a623-53fe-44a2-9810-5d1137c659c3","Type":"ContainerDied","Data":"ee48e1466c00be71a5cc4e94080113b3179b45afeb01e2591c730c312c7e1330"} Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.763886 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee48e1466c00be71a5cc4e94080113b3179b45afeb01e2591c730c312c7e1330" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.763940 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f03-account-create-update-s5gbm" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.769835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl" (OuterVolumeSpecName: "kube-api-access-wsgsl") pod "6c07a623-53fe-44a2-9810-5d1137c659c3" (UID: "6c07a623-53fe-44a2-9810-5d1137c659c3"). InnerVolumeSpecName "kube-api-access-wsgsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.848501 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c07a623-53fe-44a2-9810-5d1137c659c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:47 crc kubenswrapper[4793]: I0130 14:05:47.848769 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsgsl\" (UniqueName: \"kubernetes.io/projected/6c07a623-53fe-44a2-9810-5d1137c659c3-kube-api-access-wsgsl\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:48 crc kubenswrapper[4793]: I0130 14:05:48.776204 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"e6907cc4a13ada511c431bc65d19038e6579ea9e06b02d2113fec03a91364c05"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.806535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"fa17303ac81f2866c07d19bb2791483d673952e150dbd38aeac5b7f7eabe7145"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"d0bb2976fcd9f88d5b17ac1344e29c8a7f6f0d50d91ae2369adc070a90760ebc"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"85cc081d349d8f25d864c73f8ae1cf92b099090c00f1063588734a402ae9ab35"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"bda63c33421cef222fc8346e2b9032522aed037330fe15c10e51e24ebf14a667"} Jan 30 14:05:50 crc kubenswrapper[4793]: I0130 14:05:50.807224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"3147d8a7ab7c1d494fe3d27290744f7596eb55fa8d698807dbfd2b3a8b2c563e"} Jan 30 14:05:51 crc kubenswrapper[4793]: I0130 14:05:51.835755 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"59900819fa6df37dc180cc6c984672f1f3438adc2e7c3ae2fcb67afa9bb927f8"} Jan 30 14:05:51 crc kubenswrapper[4793]: I0130 14:05:51.835827 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"76182868-5b55-403e-a2be-0c6879e9a2e0","Type":"ContainerStarted","Data":"002d93801806dd0f9073e76b9fe0dd9d5b2c07d7aa2f976d76b8b977cf3c98b6"} Jan 30 14:05:51 crc kubenswrapper[4793]: I0130 14:05:51.886113 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.499502482 podStartE2EDuration="50.886039499s" podCreationTimestamp="2026-01-30 14:05:01 +0000 UTC" firstStartedPulling="2026-01-30 14:05:35.454834956 +0000 UTC m=+1346.156183447" lastFinishedPulling="2026-01-30 14:05:49.841371983 +0000 UTC m=+1360.542720464" observedRunningTime="2026-01-30 14:05:51.878349441 +0000 UTC 
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.238452 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"]
Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.238993 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239010 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239034 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239056 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239068 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00abb05-5932-47c8-9bd4-34014f966013" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239075 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00abb05-5932-47c8-9bd4-34014f966013" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239083 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239089 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239100 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f786311-b5ef-427f-b167-c49267de28c6" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239105 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f786311-b5ef-427f-b167-c49267de28c6" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: E0130 14:05:52.239153 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13613099-2932-4476-8032-82095348fb10" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239159 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="13613099-2932-4476-8032-82095348fb10" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239302 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239324 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239334 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239360 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f786311-b5ef-427f-b167-c49267de28c6" containerName="mariadb-account-create-update"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239379 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00abb05-5932-47c8-9bd4-34014f966013" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.239393 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="13613099-2932-4476-8032-82095348fb10" containerName="mariadb-database-create"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.240210 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.244653 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.255574 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"]
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.332836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333173 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333314 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.333802 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435502 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435591 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435679 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.435721 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.436481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.436615 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.436841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.437091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.437542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.454393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"dnsmasq-dns-764c5664d7-jxcnx\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.554028 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx"
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.829331 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"]
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.844502 4793 generic.go:334] "Generic (PLEG): container finished" podID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" containerID="2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500" exitCode=0
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.844612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerDied","Data":"2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500"}
Jan 30 14:05:52 crc kubenswrapper[4793]: I0130 14:05:52.846460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerStarted","Data":"bb05c1a5e71872db9d1f0feebcb1261f0a0b54ef70c537588201ef29f3f19c4c"}
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.854311 4793 generic.go:334] "Generic (PLEG): container finished" podID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerID="aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c" exitCode=0
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.854395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerDied","Data":"aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c"}
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.857183 4793 generic.go:334] "Generic (PLEG): container finished" podID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerID="d4cf0d819a831c4b22d621ad832e53fd5393704103774f332bf0ecbe457050ee" exitCode=0
Jan 30 14:05:53 crc kubenswrapper[4793]: I0130 14:05:53.857254 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerDied","Data":"d4cf0d819a831c4b22d621ad832e53fd5393704103774f332bf0ecbe457050ee"}
Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.113158 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbw76"
Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.163120 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") pod \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.163252 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") pod \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.163321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") pod \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\" (UID: \"caec468e-bf72-4c93-8b47-6aac4c7a0b3d\") " Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.168367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r" (OuterVolumeSpecName: "kube-api-access-xcb5r") pod "caec468e-bf72-4c93-8b47-6aac4c7a0b3d" (UID: "caec468e-bf72-4c93-8b47-6aac4c7a0b3d"). InnerVolumeSpecName "kube-api-access-xcb5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.186547 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caec468e-bf72-4c93-8b47-6aac4c7a0b3d" (UID: "caec468e-bf72-4c93-8b47-6aac4c7a0b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.212432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data" (OuterVolumeSpecName: "config-data") pod "caec468e-bf72-4c93-8b47-6aac4c7a0b3d" (UID: "caec468e-bf72-4c93-8b47-6aac4c7a0b3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.265015 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcb5r\" (UniqueName: \"kubernetes.io/projected/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-kube-api-access-xcb5r\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.265073 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.265083 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caec468e-bf72-4c93-8b47-6aac4c7a0b3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.868766 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbw76" event={"ID":"caec468e-bf72-4c93-8b47-6aac4c7a0b3d","Type":"ContainerDied","Data":"73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c"} Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.868844 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73bb4553c0d51c829203402dacc690b0897fb164b96704ad8590b84c04119a3c" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.868801 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbw76" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.871884 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerStarted","Data":"80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24"} Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.871955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:54 crc kubenswrapper[4793]: I0130 14:05:54.914492 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" podStartSLOduration=2.914466144 podStartE2EDuration="2.914466144s" podCreationTimestamp="2026-01-30 14:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:54.905164277 +0000 UTC m=+1365.606512788" watchObservedRunningTime="2026-01-30 14:05:54.914466144 +0000 UTC m=+1365.615814645" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.256928 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.287211 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:05:55 crc kubenswrapper[4793]: E0130 14:05:55.287739 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" containerName="keystone-db-sync" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.287758 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" containerName="keystone-db-sync" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.287929 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" 
containerName="keystone-db-sync" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.288562 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.295658 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.295863 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.296026 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.296276 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.296392 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.323880 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.339407 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.340742 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.372996 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390575 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390622 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390666 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390685 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390758 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.390993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391119 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391200 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391279 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.391443 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.492988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.493921 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494838 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494978 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.494997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlxm\" (UniqueName: 
\"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.495015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.495116 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.495137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.497041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.499306 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.508690 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.514698 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.515514 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.518401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " 
pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.510337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.518721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.557709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"keystone-bootstrap-p79cl\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.559393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"dnsmasq-dns-5959f8865f-tnbbm\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.624917 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.634657 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.636130 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.656897 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.660668 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.690028 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-f5qx4" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.695791 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.696036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.696595 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739461 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739558 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.739649 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.831685 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.832730 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.837653 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840698 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840820 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.840878 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.841598 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.841811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.842892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.844744 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2b9wh" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.853960 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.878258 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.916338 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.929708 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"horizon-787bd77877-l9df5\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.946872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.946985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.947017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.972189 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.973293 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.978631 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.978815 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5kb4p" Jan 30 14:05:55 crc kubenswrapper[4793]: I0130 14:05:55.979165 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.002589 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.005259 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.013875 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.014086 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8krj5" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.014210 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.028992 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.047783 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048326 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048479 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"barbican-db-sync-gpt4t\" (UID: 
\"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048560 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.048577 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.049512 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.079543 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.081681 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.088290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.103969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"barbican-db-sync-gpt4t\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.104025 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139333 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:05:56 crc kubenswrapper[4793]: E0130 14:05:56.139770 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerName="glance-db-sync" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerName="glance-db-sync" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.139965 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" containerName="glance-db-sync" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.140591 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.151811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.151862 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.151940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") pod \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\" (UID: \"2b977757-3d3e-48e5-a1e2-d31ebeda138e\") " Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152427 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152493 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152568 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152612 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152737 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152753 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.152779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.159543 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.159794 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.160136 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-brjvn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.161944 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.168869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.173495 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.183773 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.183770 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.183966 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.184557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.198287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j" (OuterVolumeSpecName: "kube-api-access-6bt5j") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). InnerVolumeSpecName "kube-api-access-6bt5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.200730 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.208434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.243679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"cinder-db-sync-4rknj\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262924 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262957 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.262998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " 
pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263061 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263171 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263197 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263300 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263341 4793 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-6bt5j\" (UniqueName: \"kubernetes.io/projected/2b977757-3d3e-48e5-a1e2-d31ebeda138e-kube-api-access-6bt5j\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.263352 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.264984 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.265555 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.265889 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.266949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.280835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.281650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.287157 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.290640 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.295210 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.307283 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.315843 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.316776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.318126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.318329 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.319638 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.319937 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"placement-db-sync-kkrt6\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.338659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"dnsmasq-dns-58dd9ff6bc-kbrx4\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.371174 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kkrt6" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374101 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374246 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374300 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374357 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.374483 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.383402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.389933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.408905 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.420653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.421521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"neutron-db-sync-9k2k7\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") " pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.448424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data" (OuterVolumeSpecName: "config-data") pod "2b977757-3d3e-48e5-a1e2-d31ebeda138e" (UID: "2b977757-3d3e-48e5-a1e2-d31ebeda138e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.450950 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.452388 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.471499 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483502 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 
14:05:56.483860 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.483989 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b977757-3d3e-48e5-a1e2-d31ebeda138e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.484765 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.485965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.491572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.491912 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9k2k7" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.494361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.501502 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.507987 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.585692 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.591570 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.603708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.603885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.604192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.589693 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"ceilometer-0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.586571 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.606952 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.596814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.630492 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.632796 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.642835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"horizon-8698dbdc7f-7rwcn\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.796861 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.803841 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.964529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-btxs9" event={"ID":"2b977757-3d3e-48e5-a1e2-d31ebeda138e","Type":"ContainerDied","Data":"7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b"} Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.964580 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ef1978da215da441ac8cf72de6c6774bfd0f063eea75236ae6171402912d11b" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.964670 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-btxs9" Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.995098 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" containerID="cri-o://80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24" gracePeriod=10 Jan 30 14:05:56 crc kubenswrapper[4793]: I0130 14:05:56.995222 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" event={"ID":"e6a668ba-7440-4eb2-ba94-29c9f1916625","Type":"ContainerStarted","Data":"c2a515cc3d3f339a5e32e30b902a887bb34f4e6875238ac55c8088138646231b"} Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.123584 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.370392 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.454954 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.694087 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.735825 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:05:57 crc kubenswrapper[4793]: W0130 14:05:57.752812 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056322cc_65a1_41ad_84a8_a01c8b7e2ac3.slice/crio-2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5 WatchSource:0}: Error finding container 2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5: Status 404 returned error can't find the container with id 2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5 Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.909122 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.930867 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.937559 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:57 crc kubenswrapper[4793]: I0130 14:05:57.951543 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:57.988919 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:57.988953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.006227 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096553 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096894 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.096923 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.117353 4793 generic.go:334] "Generic (PLEG): container finished" podID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerID="80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24" exitCode=0 Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.117438 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerDied","Data":"80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.137770 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerStarted","Data":"b3e8e1acd1cd561d606e595452b7ed4d9ad040eaf08a66d7af08e7308d6d261e"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.180238 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.183104 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" event={"ID":"056322cc-65a1-41ad-84a8-a01c8b7e2ac3","Type":"ContainerStarted","Data":"2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207883 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.207954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.208012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.209008 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod 
\"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.209279 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.209530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.210023 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.210532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.231814 4793 generic.go:334] "Generic (PLEG): container finished" podID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerID="15d506971acedaa7bb99095c847196af33271345f5a9e05340688d33bdaff291" exitCode=0 Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.231883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" event={"ID":"e6a668ba-7440-4eb2-ba94-29c9f1916625","Type":"ContainerDied","Data":"15d506971acedaa7bb99095c847196af33271345f5a9e05340688d33bdaff291"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.234895 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerStarted","Data":"fc613fe2ad6c1be056bd77d206032a6320f75af4b1f9de343208058c0b3d8709"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.256622 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"dnsmasq-dns-785d8bcb8c-zbt8c\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.296732 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-787bd77877-l9df5" event={"ID":"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88","Type":"ContainerStarted","Data":"0b3a3424f23b7d6c10b04af0639314688a591e4cf45a995b12aa2a751c3d037b"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.318187 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerStarted","Data":"c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438"} Jan 30 14:05:58 
crc kubenswrapper[4793]: I0130 14:05:58.318247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerStarted","Data":"0235cbe667410a12fd0f43900b65c18ce6c6b1f1487e76a077fc7aad8e3b66de"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.373484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerStarted","Data":"6d4763986d1b4a11b99da97ae431575d2b3082d3a2bdcdbedb9c248948af623d"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.377336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerStarted","Data":"951aaae1b3a62ddc2954a80d0b215b523c731d1bf004dc9a3391b04cbf64290b"} Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.400309 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p79cl" podStartSLOduration=3.400289783 podStartE2EDuration="3.400289783s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:58.366648558 +0000 UTC m=+1369.067997049" watchObservedRunningTime="2026-01-30 14:05:58.400289783 +0000 UTC m=+1369.101638274" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.416021 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.502850 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664191 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664239 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664560 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.664583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") pod \"d503f433-f37b-45ed-a7e5-fc845b97e985\" (UID: \"d503f433-f37b-45ed-a7e5-fc845b97e985\") " Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.686238 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv" (OuterVolumeSpecName: "kube-api-access-ppqxv") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "kube-api-access-ppqxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.766786 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppqxv\" (UniqueName: \"kubernetes.io/projected/d503f433-f37b-45ed-a7e5-fc845b97e985-kube-api-access-ppqxv\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.776254 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:05:58 crc kubenswrapper[4793]: E0130 14:05:58.777635 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="init" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.777655 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="init" Jan 30 14:05:58 crc kubenswrapper[4793]: E0130 14:05:58.777672 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.777678 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.780563 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" containerName="dnsmasq-dns" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.784520 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.812606 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jb79g" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.821319 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.821511 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.866595 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.869760 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.879783 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.885420 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.893453 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:58 crc kubenswrapper[4793]: I0130 14:05:58.969193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config" (OuterVolumeSpecName: "config") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017654 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017730 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017749 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017850 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.017972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.018028 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.018072 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.018083 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.019465 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d503f433-f37b-45ed-a7e5-fc845b97e985" (UID: "d503f433-f37b-45ed-a7e5-fc845b97e985"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.061759 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: E0130 14:05:59.062401 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run kube-api-access-ggxjm logs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="95920882-93c3-4a03-bfc1-cfeaeef10bd6" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.101980 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121135 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.121537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122518 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d503f433-f37b-45ed-a7e5-fc845b97e985-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.122745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.123120 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.126957 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.127727 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.132934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.136020 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.184734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.189267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.204109 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.207347 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.247761 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.300025 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.304837 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.323613 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.329886 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.329962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.329991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.330030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.330087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.349895 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.407142 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433407 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433613 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.433948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434193 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434272 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434423 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wstbg\" (UniqueName: 
\"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.434705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.435188 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.435789 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.443929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.474295 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.474431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"50cb694f90f1d6a53f515af750afb638a61a81c6b156cbc3d6081c5686d9e08c"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.496601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"horizon-6b66cd9fcf-c94kp\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.514022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" event={"ID":"d503f433-f37b-45ed-a7e5-fc845b97e985","Type":"ContainerDied","Data":"bb05c1a5e71872db9d1f0feebcb1261f0a0b54ef70c537588201ef29f3f19c4c"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.514246 4793 scope.go:117] "RemoveContainer" containerID="80569e834327346f4a6679f3be59a9d590633f158c858f69eb9e397080c34f24" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.514454 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jxcnx" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542868 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.542891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543057 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") pod \"e6a668ba-7440-4eb2-ba94-29c9f1916625\" (UID: \"e6a668ba-7440-4eb2-ba94-29c9f1916625\") " Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543337 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543444 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543478 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.543524 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.547795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.557288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.577720 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.599187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.600435 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.608359 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm" (OuterVolumeSpecName: "kube-api-access-9zlxm") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "kube-api-access-9zlxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.611372 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerStarted","Data":"3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.636124 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.641134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.651214 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlxm\" (UniqueName: \"kubernetes.io/projected/e6a668ba-7440-4eb2-ba94-29c9f1916625-kube-api-access-9zlxm\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.666483 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.707368 4793 generic.go:334] "Generic (PLEG): container finished" podID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerID="baf53c748c6a6992b01298fe55003ed2cd87ea55e116f674ef10391d191eb4a2" exitCode=0 Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.707472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" event={"ID":"056322cc-65a1-41ad-84a8-a01c8b7e2ac3","Type":"ContainerDied","Data":"baf53c748c6a6992b01298fe55003ed2cd87ea55e116f674ef10391d191eb4a2"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.721126 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8698dbdc7f-7rwcn" event={"ID":"1f30f95a-540c-4e30-acce-229ae81b4215","Type":"ContainerStarted","Data":"195ee6e5e0794333cda4ea233faeb9fe7d4329bd8a1e2d492ad5c4a6790f9c89"} Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.721271 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.768802 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.800804 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9k2k7" podStartSLOduration=4.800781967 podStartE2EDuration="4.800781967s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:05:59.677190137 +0000 UTC m=+1370.378538638" watchObservedRunningTime="2026-01-30 14:05:59.800781967 +0000 UTC m=+1370.502130468" Jan 30 14:05:59 crc kubenswrapper[4793]: W0130 14:05:59.835165 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb318d131_c8b9_41a5_a500_f8a9405e0074.slice/crio-de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85 WatchSource:0}: Error finding container de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85: Status 404 returned error can't find the container with id de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85 Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.848982 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.891123 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.899645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.929387 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:05:59 crc kubenswrapper[4793]: I0130 14:05:59.947153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:05:59.993172 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:05:59.993201 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.037011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config" (OuterVolumeSpecName: "config") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.063236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6a668ba-7440-4eb2-ba94-29c9f1916625" (UID: "e6a668ba-7440-4eb2-ba94-29c9f1916625"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.097161 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.097346 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a668ba-7440-4eb2-ba94-29c9f1916625-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.107599 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.117718 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.133460 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jxcnx"] Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.143092 4793 scope.go:117] "RemoveContainer" containerID="d4cf0d819a831c4b22d621ad832e53fd5393704103774f332bf0ecbe457050ee" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200616 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200762 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200802 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200926 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.200955 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") pod \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\" (UID: \"95920882-93c3-4a03-bfc1-cfeaeef10bd6\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.201389 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.203911 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs" (OuterVolumeSpecName: "logs") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.211255 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.211804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm" (OuterVolumeSpecName: "kube-api-access-ggxjm") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "kube-api-access-ggxjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.218363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts" (OuterVolumeSpecName: "scripts") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.221226 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.221326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data" (OuterVolumeSpecName: "config-data") pod "95920882-93c3-4a03-bfc1-cfeaeef10bd6" (UID: "95920882-93c3-4a03-bfc1-cfeaeef10bd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.235679 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312067 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312105 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95920882-93c3-4a03-bfc1-cfeaeef10bd6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312137 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312147 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312158 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxjm\" (UniqueName: \"kubernetes.io/projected/95920882-93c3-4a03-bfc1-cfeaeef10bd6-kube-api-access-ggxjm\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312166 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.312177 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95920882-93c3-4a03-bfc1-cfeaeef10bd6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.352567 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.418393 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.444251 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d503f433-f37b-45ed-a7e5-fc845b97e985" path="/var/lib/kubelet/pods/d503f433-f37b-45ed-a7e5-fc845b97e985/volumes" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.615728 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.741757 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742311 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742484 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.742552 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") pod \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\" (UID: \"056322cc-65a1-41ad-84a8-a01c8b7e2ac3\") " Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.766390 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn" (OuterVolumeSpecName: "kube-api-access-sqzsn") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "kube-api-access-sqzsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.787257 4793 generic.go:334] "Generic (PLEG): container finished" podID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" exitCode=0 Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.787311 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerDied","Data":"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.787337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerStarted","Data":"de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.789165 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797422 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" event={"ID":"e6a668ba-7440-4eb2-ba94-29c9f1916625","Type":"ContainerDied","Data":"c2a515cc3d3f339a5e32e30b902a887bb34f4e6875238ac55c8088138646231b"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797466 4793 scope.go:117] "RemoveContainer" containerID="15d506971acedaa7bb99095c847196af33271345f5a9e05340688d33bdaff291" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797569 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tnbbm" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.797859 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.798310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.838108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" event={"ID":"056322cc-65a1-41ad-84a8-a01c8b7e2ac3","Type":"ContainerDied","Data":"2b23e0d92930d14490b62a976bcd1c55e52803bb1166bdf22fd572ab7384aac5"} Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.838172 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-kbrx4" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.838543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.858291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861096 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzsn\" (UniqueName: \"kubernetes.io/projected/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-kube-api-access-sqzsn\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861128 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861138 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.861147 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.895145 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config" (OuterVolumeSpecName: "config") pod "056322cc-65a1-41ad-84a8-a01c8b7e2ac3" (UID: "056322cc-65a1-41ad-84a8-a01c8b7e2ac3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.968672 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:00 crc kubenswrapper[4793]: I0130 14:06:00.968927 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056322cc-65a1-41ad-84a8-a01c8b7e2ac3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.030503 4793 scope.go:117] "RemoveContainer" containerID="baf53c748c6a6992b01298fe55003ed2cd87ea55e116f674ef10391d191eb4a2" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.040087 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.068440 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tnbbm"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.097199 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.153393 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.280973 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: E0130 14:06:01.281398 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281411 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: E0130 14:06:01.281421 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281427 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281623 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.281642 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" containerName="init" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.282990 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.292580 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.300873 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.334435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.386889 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.386942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.386960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387013 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387076 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.387094 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.390332 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"] 
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.408102 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-kbrx4"]
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.488904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.489601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490644 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490699 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.490748 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.491085 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.492099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.498687 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.500860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.519255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.521333 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.533106 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.641333 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.866776 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"abb829370f6052fa5b93898ca6acb8788a4543ea051b65ba7f0f97b896bb3dd6"}
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.872850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerStarted","Data":"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630"}
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.876561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerStarted","Data":"75d5f63d74ded6af6fe90efd5846a2c83282bfbfb878df2f8d8cd8df32ecf051"}
Jan 30 14:06:01 crc kubenswrapper[4793]: I0130 14:06:01.895606 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" podStartSLOduration=4.895584603 podStartE2EDuration="4.895584603s" podCreationTimestamp="2026-01-30 14:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:01.891163625 +0000 UTC m=+1372.592512116" watchObservedRunningTime="2026-01-30 14:06:01.895584603 +0000 UTC m=+1372.596933094"
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.443431 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056322cc-65a1-41ad-84a8-a01c8b7e2ac3" path="/var/lib/kubelet/pods/056322cc-65a1-41ad-84a8-a01c8b7e2ac3/volumes"
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.447002 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95920882-93c3-4a03-bfc1-cfeaeef10bd6" path="/var/lib/kubelet/pods/95920882-93c3-4a03-bfc1-cfeaeef10bd6/volumes"
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.447564 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a668ba-7440-4eb2-ba94-29c9f1916625" path="/var/lib/kubelet/pods/e6a668ba-7440-4eb2-ba94-29c9f1916625/volumes"
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.696330 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.917858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerStarted","Data":"b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd"}
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.953774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerStarted","Data":"bb1822de99167e67b698d62e79b73155f8af99f3f73a4a9033d2f811e3931452"}
Jan 30 14:06:02 crc kubenswrapper[4793]: I0130 14:06:02.954039 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c"
Jan 30 14:06:05 crc kubenswrapper[4793]: I0130 14:06:05.011646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerStarted","Data":"40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e"}
Jan 30 14:06:05 crc kubenswrapper[4793]: I0130 14:06:05.013646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerStarted","Data":"4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e"}
Jan 30 14:06:05 crc kubenswrapper[4793]: I0130 14:06:05.044705 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.044686866 podStartE2EDuration="6.044686866s" podCreationTimestamp="2026-01-30 14:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:05.037169102 +0000 UTC m=+1375.738517593" watchObservedRunningTime="2026-01-30 14:06:05.044686866 +0000 UTC m=+1375.746035357"
Jan 30 14:06:06 crc kubenswrapper[4793]: I0130 14:06:06.028684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerStarted","Data":"7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7"}
Jan 30 14:06:06 crc kubenswrapper[4793]: I0130 14:06:06.069664 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.069645114 podStartE2EDuration="5.069645114s" podCreationTimestamp="2026-01-30 14:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:06.055100118 +0000 UTC m=+1376.756448609" watchObservedRunningTime="2026-01-30 14:06:06.069645114 +0000 UTC m=+1376.770993605"
Jan 30 14:06:08 crc kubenswrapper[4793]: I0130 14:06:08.418320 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c"
Jan 30 14:06:08 crc kubenswrapper[4793]: I0130 14:06:08.512174 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"]
Jan 30 14:06:08 crc kubenswrapper[4793]: I0130 14:06:08.512385 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" containerID="cri-o://610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d" gracePeriod=10
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.270024 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.270250 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" containerID="cri-o://40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e" gracePeriod=30
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.272366 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" containerID="cri-o://7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7" gracePeriod=30
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.413262 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.413832 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" containerID="cri-o://b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd" gracePeriod=30
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.413905 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" containerID="cri-o://4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e" gracePeriod=30
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.433500 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"]
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.496678 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b9fc5f8f6-nj7xv"]
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.501221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.508852 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.523982 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9fc5f8f6-nj7xv"]
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.609447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-config-data\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.609899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-scripts\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-combined-ca-bundle\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-secret-key\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610383 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjs5m\" (UniqueName: \"kubernetes.io/projected/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-kube-api-access-sjs5m\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-tls-certs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.610709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-logs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-tls-certs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-logs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712302 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-config-data\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-scripts\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712383 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-combined-ca-bundle\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-secret-key\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.712462 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjs5m\" (UniqueName: \"kubernetes.io/projected/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-kube-api-access-sjs5m\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.713550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-logs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.713832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-scripts\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.714611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-config-data\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.719546 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-secret-key\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.719751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-combined-ca-bundle\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.741191 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-horizon-tls-certs\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.742357 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjs5m\" (UniqueName: \"kubernetes.io/projected/7c37d49c-cbd6-47d6-8f29-51ec6fac2f61-kube-api-access-sjs5m\") pod \"horizon-5b9fc5f8f6-nj7xv\" (UID: \"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61\") " pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:09 crc kubenswrapper[4793]: I0130 14:06:09.831058 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9fc5f8f6-nj7xv"
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.105726 4793 generic.go:334] "Generic (PLEG): container finished" podID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerID="610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d" exitCode=0
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.105790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerDied","Data":"610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d"}
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111006 4793 generic.go:334] "Generic (PLEG): container finished" podID="95da467e-d092-4859-b82e-669b122856c9" containerID="7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7" exitCode=0
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111031 4793 generic.go:334] "Generic (PLEG): container finished" podID="95da467e-d092-4859-b82e-669b122856c9" containerID="40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e" exitCode=143
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111084 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerDied","Data":"7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7"}
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.111130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerDied","Data":"40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e"}
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113600 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerID="4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e" exitCode=0
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113620 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerID="b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd" exitCode=143
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113650 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerDied","Data":"4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e"}
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.113666 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerDied","Data":"b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd"}
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.115110 4793 generic.go:334] "Generic (PLEG): container finished" podID="8195589a-9117-4f82-875b-1e0deec11c01" containerID="c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438" exitCode=0
Jan 30 14:06:10 crc kubenswrapper[4793]: I0130 14:06:10.115134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerDied","Data":"c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438"}
Jan 30 14:06:12 crc kubenswrapper[4793]: I0130 14:06:12.413249 4793
patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:06:12 crc kubenswrapper[4793]: I0130 14:06:12.413844 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:06:16 crc kubenswrapper[4793]: I0130 14:06:16.809202 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 30 14:06:18 crc kubenswrapper[4793]: I0130 14:06:18.829362 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out" Jan 30 14:06:18 crc kubenswrapper[4793]: I0130 14:06:18.848462 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.456990 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.461863 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.641810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642237 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642333 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642384 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642461 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642484 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") pod \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\" (UID: \"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642545 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc 
kubenswrapper[4793]: I0130 14:06:19.642599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.642633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") pod \"95da467e-d092-4859-b82e-669b122856c9\" (UID: \"95da467e-d092-4859-b82e-669b122856c9\") " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.646958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs" (OuterVolumeSpecName: "logs") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.647246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.688222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.688386 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd" (OuterVolumeSpecName: "kube-api-access-v9szd") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "kube-api-access-v9szd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.695332 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts" (OuterVolumeSpecName: "scripts") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.721337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm" (OuterVolumeSpecName: "kube-api-access-74tsm") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "kube-api-access-74tsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745774 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745806 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9szd\" (UniqueName: \"kubernetes.io/projected/95da467e-d092-4859-b82e-669b122856c9-kube-api-access-v9szd\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745820 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745831 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74tsm\" (UniqueName: \"kubernetes.io/projected/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-kube-api-access-74tsm\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745843 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.745853 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95da467e-d092-4859-b82e-669b122856c9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.776658 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.835424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.846938 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.846965 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.869624 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data" (OuterVolumeSpecName: "config-data") pod "95da467e-d092-4859-b82e-669b122856c9" (UID: "95da467e-d092-4859-b82e-669b122856c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.872499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.883346 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config" (OuterVolumeSpecName: "config") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.887467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.888023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" (UID: "81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948846 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948882 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948894 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da467e-d092-4859-b82e-669b122856c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948905 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:19 crc kubenswrapper[4793]: I0130 14:06:19.948916 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.074617 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.074775 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f9h5b6h587h649h698h55hddh65bh578h55chfhf9h66fh85h79h8dhffh585h67ch87h55dh5b9h5d7h65h577h5d5hdh685h669h64ch559h5d9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wstbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.077821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.255980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95da467e-d092-4859-b82e-669b122856c9","Type":"ContainerDied","Data":"bb1822de99167e67b698d62e79b73155f8af99f3f73a4a9033d2f811e3931452"} Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.256033 4793 scope.go:117] "RemoveContainer" containerID="7d4ae9a017860f2c49c7a68d93ab79a59e3223d425104405ff48022e02c702d7" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.256682 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.257896 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tp7zf" event={"ID":"81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1","Type":"ContainerDied","Data":"d3a25e8a3b91c8c4040360de5d0cfe31c348e5b8ddffa9f734cc6f66d6f94231"} Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.259237 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tp7zf" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.263135 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.318620 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.324679 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tp7zf"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.331916 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.338713 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.349807 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350309 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350331 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350376 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350385 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350400 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350408 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" Jan 30 14:06:20 crc kubenswrapper[4793]: E0130 14:06:20.350418 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="init" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350426 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="init" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 
14:06:20.350637 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-log" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350671 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.350687 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="95da467e-d092-4859-b82e-669b122856c9" containerName="glance-httpd" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.351831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.355213 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.355421 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.363430 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.427365 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" path="/var/lib/kubelet/pods/81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1/volumes" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.427980 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95da467e-d092-4859-b82e-669b122856c9" path="/var/lib/kubelet/pods/95da467e-d092-4859-b82e-669b122856c9/volumes" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.458953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459061 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459198 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459224 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.459267 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.560723 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561557 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.561987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562593 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.562799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.563326 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.566017 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.566252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.566336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.567854 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.589855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.636391 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") " pod="openstack/glance-default-external-api-0" Jan 30 14:06:20 crc kubenswrapper[4793]: I0130 14:06:20.723631 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:06:21 crc kubenswrapper[4793]: I0130 14:06:21.810081 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tp7zf" podUID="81ed15ab-f5a0-4d7e-b528-bc143b9a5ba1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 30 14:06:30 crc kubenswrapper[4793]: I0130 14:06:30.237207 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:30 crc kubenswrapper[4793]: I0130 14:06:30.238136 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.521715 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.522267 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhb8h655h64bh686h97h5d4h644h68h648h77hf5h57h656h64fh585h59fh77h5fh688h5cch55hc7h5d7h648h699h66ch5f7h66h58fh55h599q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtnhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-787bd77877-l9df5_openstack(4bd63ed1-4883-41ca-b7bb-f23bb10f5c88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.525598 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-787bd77877-l9df5" podUID="4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.987496 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.987769 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr8nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-gpt4t_openstack(126207f4-9b13-4892-aa15-0616a488af8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:30 crc kubenswrapper[4793]: E0130 14:06:30.989147 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-gpt4t" podUID="126207f4-9b13-4892-aa15-0616a488af8c" Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.029596 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.029810 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77hd5h5cfh55dh689h654hc4h664h5f5h566h657h576h647hcfh687h96h5fch5dch66hb6h686h59h5cch688h594h654hbbh5dbh57h5f5h66bhfdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mcgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-8698dbdc7f-7rwcn_openstack(1f30f95a-540c-4e30-acce-229ae81b4215): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.036284 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-8698dbdc7f-7rwcn" podUID="1f30f95a-540c-4e30-acce-229ae81b4215" Jan 30 14:06:31 crc kubenswrapper[4793]: E0130 14:06:31.356817 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-gpt4t" podUID="126207f4-9b13-4892-aa15-0616a488af8c" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.791426 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.798518 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860544 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860637 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860674 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860709 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860730 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860755 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860853 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860874 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 
14:06:40.860902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860940 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") pod \"8195589a-9117-4f82-875b-1e0deec11c01\" (UID: \"8195589a-9117-4f82-875b-1e0deec11c01\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.860972 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") pod \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\" (UID: \"47d5b1a9-edbe-4b43-8395-cb1fa337ad28\") " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.861552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.861748 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs" (OuterVolumeSpecName: "logs") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.864624 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.865332 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts" (OuterVolumeSpecName: "scripts") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.865506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts" (OuterVolumeSpecName: "scripts") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.868151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl" (OuterVolumeSpecName: "kube-api-access-t5mrl") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "kube-api-access-t5mrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.879447 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf" (OuterVolumeSpecName: "kube-api-access-tt5hf") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "kube-api-access-tt5hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.879611 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.879877 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.892830 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.915021 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.922182 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data" (OuterVolumeSpecName: "config-data") pod "8195589a-9117-4f82-875b-1e0deec11c01" (UID: "8195589a-9117-4f82-875b-1e0deec11c01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.940344 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data" (OuterVolumeSpecName: "config-data") pod "47d5b1a9-edbe-4b43-8395-cb1fa337ad28" (UID: "47d5b1a9-edbe-4b43-8395-cb1fa337ad28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965159 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mrl\" (UniqueName: \"kubernetes.io/projected/8195589a-9117-4f82-875b-1e0deec11c01-kube-api-access-t5mrl\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965194 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965221 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965265 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965278 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965288 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965298 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965308 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8195589a-9117-4f82-875b-1e0deec11c01-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965319 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt5hf\" (UniqueName: \"kubernetes.io/projected/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-kube-api-access-tt5hf\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965331 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965341 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965352 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:40 crc kubenswrapper[4793]: I0130 14:06:40.965363 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d5b1a9-edbe-4b43-8395-cb1fa337ad28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.000926 4793 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.066488 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.440364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p79cl" event={"ID":"8195589a-9117-4f82-875b-1e0deec11c01","Type":"ContainerDied","Data":"0235cbe667410a12fd0f43900b65c18ce6c6b1f1487e76a077fc7aad8e3b66de"} Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.440630 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0235cbe667410a12fd0f43900b65c18ce6c6b1f1487e76a077fc7aad8e3b66de" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.440376 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p79cl" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.441908 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"47d5b1a9-edbe-4b43-8395-cb1fa337ad28","Type":"ContainerDied","Data":"75d5f63d74ded6af6fe90efd5846a2c83282bfbfb878df2f8d8cd8df32ecf051"} Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.441969 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.485242 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.499825 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.542641 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: E0130 14:06:41.543112 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8195589a-9117-4f82-875b-1e0deec11c01" containerName="keystone-bootstrap" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543129 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8195589a-9117-4f82-875b-1e0deec11c01" containerName="keystone-bootstrap" Jan 30 14:06:41 crc kubenswrapper[4793]: E0130 14:06:41.543144 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543167 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" Jan 30 14:06:41 crc kubenswrapper[4793]: E0130 14:06:41.543204 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543210 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543355 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-httpd" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 
14:06:41.543370 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8195589a-9117-4f82-875b-1e0deec11c01" containerName="keystone-bootstrap" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.543384 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" containerName="glance-log" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.545686 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.552387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.553289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.640991 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694395 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694591 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694612 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 
14:06:41.694647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.694679 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.796787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797864 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.797979 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798412 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798657 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.798766 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.799125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.804222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.808486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.810181 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.820410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.824520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.829593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.903040 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.911669 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p79cl"] Jan 30 14:06:41 crc kubenswrapper[4793]: I0130 14:06:41.949233 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.017385 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.018487 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.021194 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.021413 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.021462 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.023074 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.024946 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.039008 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.205502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206363 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: 
\"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206411 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.206456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: E0130 14:06:42.219708 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 14:06:42 crc kubenswrapper[4793]: E0130 14:06:42.219862 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ffh654hddhf7h5f8h678h689h64bh575h584h58ch67bh555h568h65dh5cdh5b9hf4hdh669h59fh8bh67dh568hd4h6ch595hdh548h97h644h68dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sld6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f85d7b0d-5452-4175-842b-7d1505eb82e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.269064 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.277761 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307865 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.307934 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.308008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"keystone-bootstrap-k4pgl\" (UID: 
\"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.313916 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.314125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.314527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.323869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.326668 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.328812 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"keystone-bootstrap-k4pgl\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.377481 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.409099 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.409161 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410219 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data" (OuterVolumeSpecName: "config-data") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410259 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410334 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410532 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") pod \"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410596 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") pod 
\"1f30f95a-540c-4e30-acce-229ae81b4215\" (UID: \"1f30f95a-540c-4e30-acce-229ae81b4215\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") pod \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\" (UID: \"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88\") " Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.410832 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts" (OuterVolumeSpecName: "scripts") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411302 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs" (OuterVolumeSpecName: "logs") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411323 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411661 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data" (OuterVolumeSpecName: "config-data") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.411899 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts" (OuterVolumeSpecName: "scripts") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.412237 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs" (OuterVolumeSpecName: "logs") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.413418 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.413727 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.413774 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.414485 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d5b1a9-edbe-4b43-8395-cb1fa337ad28" path="/var/lib/kubelet/pods/47d5b1a9-edbe-4b43-8395-cb1fa337ad28/volumes" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.415168 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8195589a-9117-4f82-875b-1e0deec11c01" path="/var/lib/kubelet/pods/8195589a-9117-4f82-875b-1e0deec11c01/volumes" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.416485 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.416519 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn" (OuterVolumeSpecName: "kube-api-access-7mcgn") pod "1f30f95a-540c-4e30-acce-229ae81b4215" (UID: "1f30f95a-540c-4e30-acce-229ae81b4215"). InnerVolumeSpecName "kube-api-access-7mcgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.419347 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg" (OuterVolumeSpecName: "kube-api-access-vtnhg") pod "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" (UID: "4bd63ed1-4883-41ca-b7bb-f23bb10f5c88"). InnerVolumeSpecName "kube-api-access-vtnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.449548 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-787bd77877-l9df5" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.453224 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8698dbdc7f-7rwcn" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-787bd77877-l9df5" event={"ID":"4bd63ed1-4883-41ca-b7bb-f23bb10f5c88","Type":"ContainerDied","Data":"0b3a3424f23b7d6c10b04af0639314688a591e4cf45a995b12aa2a751c3d037b"} Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470405 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470419 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8698dbdc7f-7rwcn" event={"ID":"1f30f95a-540c-4e30-acce-229ae81b4215","Type":"ContainerDied","Data":"195ee6e5e0794333cda4ea233faeb9fe7d4329bd8a1e2d492ad5c4a6790f9c89"} Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470868 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.470929 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6" gracePeriod=600 Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513305 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mcgn\" (UniqueName: \"kubernetes.io/projected/1f30f95a-540c-4e30-acce-229ae81b4215-kube-api-access-7mcgn\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513349 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513363 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f30f95a-540c-4e30-acce-229ae81b4215-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513373 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtnhg\" (UniqueName: \"kubernetes.io/projected/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-kube-api-access-vtnhg\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513383 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f30f95a-540c-4e30-acce-229ae81b4215-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513394 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f30f95a-540c-4e30-acce-229ae81b4215-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513404 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.513414 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.538746 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.562115 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8698dbdc7f-7rwcn"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.579834 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:06:42 crc kubenswrapper[4793]: I0130 14:06:42.588919 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-787bd77877-l9df5"] Jan 30 14:06:43 crc kubenswrapper[4793]: I0130 14:06:43.463132 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6" exitCode=0 Jan 30 14:06:43 crc kubenswrapper[4793]: I0130 14:06:43.463178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"} Jan 30 14:06:44 crc kubenswrapper[4793]: I0130 14:06:44.425278 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f30f95a-540c-4e30-acce-229ae81b4215" path="/var/lib/kubelet/pods/1f30f95a-540c-4e30-acce-229ae81b4215/volumes" Jan 30 14:06:44 crc kubenswrapper[4793]: I0130 14:06:44.425993 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd63ed1-4883-41ca-b7bb-f23bb10f5c88" path="/var/lib/kubelet/pods/4bd63ed1-4883-41ca-b7bb-f23bb10f5c88/volumes" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.526328 4793 scope.go:117] "RemoveContainer" containerID="40b8e80d53a26f06d0539ee09f487d43f02d75e204ed248460157c9f9bd2932e" Jan 30 14:06:45 crc kubenswrapper[4793]: E0130 14:06:45.703787 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 14:06:45 crc kubenswrapper[4793]: E0130 14:06:45.704115 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkv5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4rknj_openstack(f55384b1-b1fd-43eb-8c8d-73398a8f2ecd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:06:45 crc kubenswrapper[4793]: E0130 14:06:45.705712 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4rknj" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.736343 4793 scope.go:117] "RemoveContainer" containerID="610455f7ee877cbfe48a7dcf3922577b44a3ba262f3673e879a83bee7f9c298d" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.806356 4793 scope.go:117] "RemoveContainer" containerID="d2be4624f88c54b308ce347e2279d0b4015189b7a8bfe3be6bc12fc678ca01b1" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.952363 4793 scope.go:117] "RemoveContainer" containerID="4eea34353468e7b48cc7a2b7e05df1b19511a82085c8f2adf2ba94e4764bc33e" Jan 30 14:06:45 crc kubenswrapper[4793]: I0130 14:06:45.979540 4793 scope.go:117] "RemoveContainer" containerID="b68b41c83a25ce40914355b04f296d07cb763ba1b3cf6b31c3970b27a2f376fd" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.044354 4793 scope.go:117] 
"RemoveContainer" containerID="2d2487d42ac1676516749d1fe7d34e7f815543009b077aded1798d3fcce33e28" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.087569 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9fc5f8f6-nj7xv"] Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.157490 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.265204 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:06:46 crc kubenswrapper[4793]: W0130 14:06:46.302848 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd812b0_55db_4cff_b0cd_4b18afe5a4be.slice/crio-2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380 WatchSource:0}: Error finding container 2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380: Status 404 returned error can't find the container with id 2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380 Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.503599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerStarted","Data":"2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.507978 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.512102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerStarted","Data":"32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.524576 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.555614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerStarted","Data":"bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.555663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerStarted","Data":"0b200ff63984e55abb5a41c94824217395ef35be23e2a95f9d4f2e58ad8bd186"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.567128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerStarted","Data":"f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.575914 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kkrt6" podStartSLOduration=8.656068344 
podStartE2EDuration="51.575896914s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="2026-01-30 14:05:57.783712077 +0000 UTC m=+1368.485060568" lastFinishedPulling="2026-01-30 14:06:40.703540637 +0000 UTC m=+1411.404889138" observedRunningTime="2026-01-30 14:06:46.543463628 +0000 UTC m=+1417.244812119" watchObservedRunningTime="2026-01-30 14:06:46.575896914 +0000 UTC m=+1417.277245405" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.598946 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k4pgl" podStartSLOduration=5.598924448 podStartE2EDuration="5.598924448s" podCreationTimestamp="2026-01-30 14:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:46.576835886 +0000 UTC m=+1417.278184387" watchObservedRunningTime="2026-01-30 14:06:46.598924448 +0000 UTC m=+1417.300272939" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.613762 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-gpt4t" podStartSLOduration=2.9693119169999997 podStartE2EDuration="51.613743342s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="2026-01-30 14:05:57.420993644 +0000 UTC m=+1368.122342135" lastFinishedPulling="2026-01-30 14:06:46.065425069 +0000 UTC m=+1416.766773560" observedRunningTime="2026-01-30 14:06:46.601072151 +0000 UTC m=+1417.302420662" watchObservedRunningTime="2026-01-30 14:06:46.613743342 +0000 UTC m=+1417.315091833" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.638549 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"17ee0f9e22a0cd0fff96008213438a2b5b0d6d932c5a2867f0d0bea08e359ce1"} Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.638585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"871fa7f802447852caa160c3d80754a40a8cf65dbdd07bec10a4f92b76ebe1b3"} Jan 30 14:06:46 crc kubenswrapper[4793]: E0130 14:06:46.644088 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4rknj" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" Jan 30 14:06:46 crc kubenswrapper[4793]: I0130 14:06:46.991664 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.650643 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerStarted","Data":"70a9907e2896545270e49ea508b4c54cd74205507f20d607e118c4c1d4eb4471"} Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.653558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerStarted","Data":"d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c"} Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.656472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8"} Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.659401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8"} Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.725729 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podStartSLOduration=38.725708532 podStartE2EDuration="38.725708532s" podCreationTimestamp="2026-01-30 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:47.691218636 +0000 UTC m=+1418.392567127" watchObservedRunningTime="2026-01-30 14:06:47.725708532 +0000 UTC m=+1418.427057023" Jan 30 14:06:47 crc kubenswrapper[4793]: I0130 14:06:47.727019 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b66cd9fcf-c94kp" podStartSLOduration=3.427715209 podStartE2EDuration="48.727012814s" podCreationTimestamp="2026-01-30 14:05:59 +0000 UTC" firstStartedPulling="2026-01-30 14:06:00.872068321 +0000 UTC m=+1371.573416812" lastFinishedPulling="2026-01-30 14:06:46.171365926 +0000 UTC m=+1416.872714417" observedRunningTime="2026-01-30 14:06:47.723380895 +0000 UTC m=+1418.424729396" watchObservedRunningTime="2026-01-30 14:06:47.727012814 +0000 UTC m=+1418.428361305" Jan 30 14:06:48 crc kubenswrapper[4793]: I0130 14:06:48.671331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerStarted","Data":"dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8"} Jan 30 14:06:48 crc kubenswrapper[4793]: I0130 14:06:48.672562 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433"} Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.608693 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.608938 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.683016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerStarted","Data":"7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951"} Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.710984 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.710965853 podStartE2EDuration="29.710965853s" podCreationTimestamp="2026-01-30 14:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:49.700796483 +0000 UTC m=+1420.402144984" watchObservedRunningTime="2026-01-30 14:06:49.710965853 +0000 UTC m=+1420.412314334" 
Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.831482 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:49 crc kubenswrapper[4793]: I0130 14:06:49.831714 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.695907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerStarted","Data":"031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170"} Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725313 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725371 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725384 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.725518 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.912234 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:06:50 crc kubenswrapper[4793]: I0130 14:06:50.923137 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.746730 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.746706636999999 podStartE2EDuration="10.746706637s" podCreationTimestamp="2026-01-30 14:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:06:51.739668237 +0000 UTC m=+1422.441016738" watchObservedRunningTime="2026-01-30 14:06:51.746706637 +0000 UTC m=+1422.448055118" Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.950469 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.950516 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.976223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:51 crc kubenswrapper[4793]: I0130 14:06:51.989439 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:52 crc kubenswrapper[4793]: I0130 14:06:52.727690 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:52 crc kubenswrapper[4793]: I0130 14:06:52.728088 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:06:59 crc kubenswrapper[4793]: I0130 14:06:59.610923 4793 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:06:59 crc kubenswrapper[4793]: I0130 14:06:59.834304 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 14:07:01 crc kubenswrapper[4793]: I0130 14:07:01.813389 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b"} Jan 30 14:07:01 crc kubenswrapper[4793]: I0130 14:07:01.814652 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerStarted","Data":"ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6"} Jan 30 14:07:01 crc kubenswrapper[4793]: I0130 14:07:01.837486 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4rknj" podStartSLOduration=3.996604704 podStartE2EDuration="1m6.837466123s" podCreationTimestamp="2026-01-30 14:05:55 +0000 UTC" firstStartedPulling="2026-01-30 14:05:57.54971981 +0000 UTC m=+1368.251068301" lastFinishedPulling="2026-01-30 14:07:00.390581229 +0000 UTC m=+1431.091929720" observedRunningTime="2026-01-30 14:07:01.83073473 +0000 UTC m=+1432.532083231" watchObservedRunningTime="2026-01-30 14:07:01.837466123 +0000 UTC m=+1432.538814614" Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.678499 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.686495 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.715961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.865916 4793 generic.go:334] "Generic (PLEG): container finished" podID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerID="bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7" exitCode=0 Jan 30 14:07:04 crc kubenswrapper[4793]: I0130 14:07:04.866810 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerDied","Data":"bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7"} Jan 30 14:07:05 crc kubenswrapper[4793]: I0130 14:07:05.874903 4793 generic.go:334] "Generic (PLEG): container finished" podID="644bf4c3-aaaf-45fa-9692-73406a657226" containerID="32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592" exitCode=0 Jan 30 14:07:05 crc kubenswrapper[4793]: I0130 14:07:05.874983 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" 
event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerDied","Data":"32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592"} Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.264883 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382643 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382759 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.382863 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.383666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.383783 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") pod \"b8ea0161-c696-4578-a6f7-285a4253dc0f\" (UID: \"b8ea0161-c696-4578-a6f7-285a4253dc0f\") " Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.399446 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts" (OuterVolumeSpecName: "scripts") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.400940 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.407326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6" (OuterVolumeSpecName: "kube-api-access-669j6") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "kube-api-access-669j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.422207 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.422773 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.448531 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data" (OuterVolumeSpecName: "config-data") pod "b8ea0161-c696-4578-a6f7-285a4253dc0f" (UID: "b8ea0161-c696-4578-a6f7-285a4253dc0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486645 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486687 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486703 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486719 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486733 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ea0161-c696-4578-a6f7-285a4253dc0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.486746 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-669j6\" (UniqueName: \"kubernetes.io/projected/b8ea0161-c696-4578-a6f7-285a4253dc0f-kube-api-access-669j6\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.885338 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k4pgl" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.885336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k4pgl" event={"ID":"b8ea0161-c696-4578-a6f7-285a4253dc0f","Type":"ContainerDied","Data":"0b200ff63984e55abb5a41c94824217395ef35be23e2a95f9d4f2e58ad8bd186"} Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.885471 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b200ff63984e55abb5a41c94824217395ef35be23e2a95f9d4f2e58ad8bd186" Jan 30 14:07:06 crc kubenswrapper[4793]: I0130 14:07:06.927939 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.036672 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d689db86f-zslsz"] Jan 30 14:07:07 crc kubenswrapper[4793]: E0130 14:07:07.037130 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerName="keystone-bootstrap" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.037146 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerName="keystone-bootstrap" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.037288 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" containerName="keystone-bootstrap" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.037791 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.046860 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nv6pf" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047058 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047145 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047228 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.047387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.068983 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d689db86f-zslsz"] Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.104954 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-combined-ca-bundle\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105067 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105139 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-fernet-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105215 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8fl\" (UniqueName: \"kubernetes.io/projected/0ed57c3d-4992-4cfa-8655-1587b5897df6-kube-api-access-5b8fl\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-scripts\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-config-data\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105340 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-public-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.105366 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-credential-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209285 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b8fl\" (UniqueName: \"kubernetes.io/projected/0ed57c3d-4992-4cfa-8655-1587b5897df6-kube-api-access-5b8fl\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-scripts\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209409 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-config-data\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209449 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-public-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-credential-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209505 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-combined-ca-bundle\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.209562 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-fernet-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.221276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-scripts\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.224091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-config-data\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.224096 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-combined-ca-bundle\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.224896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-public-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " 
pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.225173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-fernet-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.227867 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-internal-tls-certs\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.229709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0ed57c3d-4992-4cfa-8655-1587b5897df6-credential-keys\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.253523 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b8fl\" (UniqueName: \"kubernetes.io/projected/0ed57c3d-4992-4cfa-8655-1587b5897df6-kube-api-access-5b8fl\") pod \"keystone-d689db86f-zslsz\" (UID: \"0ed57c3d-4992-4cfa-8655-1587b5897df6\") " pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:07 crc kubenswrapper[4793]: I0130 14:07:07.373968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.608883 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.832382 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.913205 4793 generic.go:334] "Generic (PLEG): container finished" podID="126207f4-9b13-4892-aa15-0616a488af8c" containerID="f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017" exitCode=0 Jan 30 14:07:09 crc kubenswrapper[4793]: I0130 14:07:09.913270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerDied","Data":"f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017"} Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.425969 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kkrt6" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470025 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470529 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.470657 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") pod \"644bf4c3-aaaf-45fa-9692-73406a657226\" (UID: \"644bf4c3-aaaf-45fa-9692-73406a657226\") " Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.471891 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs" (OuterVolumeSpecName: "logs") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.487810 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts" (OuterVolumeSpecName: "scripts") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.488963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4" (OuterVolumeSpecName: "kube-api-access-gd7h4") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "kube-api-access-gd7h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.502639 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.530000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data" (OuterVolumeSpecName: "config-data") pod "644bf4c3-aaaf-45fa-9692-73406a657226" (UID: "644bf4c3-aaaf-45fa-9692-73406a657226"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573221 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573262 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573275 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644bf4c3-aaaf-45fa-9692-73406a657226-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573285 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644bf4c3-aaaf-45fa-9692-73406a657226-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.573297 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7h4\" (UniqueName: \"kubernetes.io/projected/644bf4c3-aaaf-45fa-9692-73406a657226-kube-api-access-gd7h4\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.924241 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kkrt6" Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.924231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kkrt6" event={"ID":"644bf4c3-aaaf-45fa-9692-73406a657226","Type":"ContainerDied","Data":"b3e8e1acd1cd561d606e595452b7ed4d9ad040eaf08a66d7af08e7308d6d261e"} Jan 30 14:07:10 crc kubenswrapper[4793]: I0130 14:07:10.924371 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e8e1acd1cd561d606e595452b7ed4d9ad040eaf08a66d7af08e7308d6d261e" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.622408 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65f95549b8-wtpxl"] Jan 30 14:07:11 crc kubenswrapper[4793]: E0130 14:07:11.623507 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" containerName="placement-db-sync" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.623526 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" containerName="placement-db-sync" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.623748 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" containerName="placement-db-sync" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.624590 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635397 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8krj5" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635584 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635742 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.635865 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.636505 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.663248 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65f95549b8-wtpxl"] Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-internal-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-config-data\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52q49\" (UniqueName: \"kubernetes.io/projected/57bfc822-1d30-49bc-a077-686b68e9c1e6-kube-api-access-52q49\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-public-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.700953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-combined-ca-bundle\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 
14:07:11.700985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bfc822-1d30-49bc-a077-686b68e9c1e6-logs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.802726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.803782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-combined-ca-bundle\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.803881 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bfc822-1d30-49bc-a077-686b68e9c1e6-logs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-internal-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804396 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bfc822-1d30-49bc-a077-686b68e9c1e6-logs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-config-data\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52q49\" (UniqueName: \"kubernetes.io/projected/57bfc822-1d30-49bc-a077-686b68e9c1e6-kube-api-access-52q49\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.804755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-public-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.809515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-scripts\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.809681 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-internal-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.822124 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-combined-ca-bundle\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.823794 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-config-data\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.823825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bfc822-1d30-49bc-a077-686b68e9c1e6-public-tls-certs\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.828653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52q49\" (UniqueName: \"kubernetes.io/projected/57bfc822-1d30-49bc-a077-686b68e9c1e6-kube-api-access-52q49\") pod \"placement-65f95549b8-wtpxl\" (UID: \"57bfc822-1d30-49bc-a077-686b68e9c1e6\") " pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:11 crc kubenswrapper[4793]: I0130 14:07:11.947909 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.911825 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.987165 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gpt4t" event={"ID":"126207f4-9b13-4892-aa15-0616a488af8c","Type":"ContainerDied","Data":"951aaae1b3a62ddc2954a80d0b215b523c731d1bf004dc9a3391b04cbf64290b"} Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.987415 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951aaae1b3a62ddc2954a80d0b215b523c731d1bf004dc9a3391b04cbf64290b" Jan 30 14:07:13 crc kubenswrapper[4793]: I0130 14:07:13.987614 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gpt4t" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.057250 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") pod \"126207f4-9b13-4892-aa15-0616a488af8c\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.057292 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") pod \"126207f4-9b13-4892-aa15-0616a488af8c\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.057483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") pod \"126207f4-9b13-4892-aa15-0616a488af8c\" (UID: \"126207f4-9b13-4892-aa15-0616a488af8c\") " Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.062656 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "126207f4-9b13-4892-aa15-0616a488af8c" (UID: "126207f4-9b13-4892-aa15-0616a488af8c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.086242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv" (OuterVolumeSpecName: "kube-api-access-sr8nv") pod "126207f4-9b13-4892-aa15-0616a488af8c" (UID: "126207f4-9b13-4892-aa15-0616a488af8c"). InnerVolumeSpecName "kube-api-access-sr8nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.112275 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "126207f4-9b13-4892-aa15-0616a488af8c" (UID: "126207f4-9b13-4892-aa15-0616a488af8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.159884 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.159916 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/126207f4-9b13-4892-aa15-0616a488af8c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.159926 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8nv\" (UniqueName: \"kubernetes.io/projected/126207f4-9b13-4892-aa15-0616a488af8c-kube-api-access-sr8nv\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:14 crc kubenswrapper[4793]: W0130 14:07:14.353815 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57bfc822_1d30_49bc_a077_686b68e9c1e6.slice/crio-8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300 WatchSource:0}: Error finding container 8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300: Status 404 returned error can't find the container with id 8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300 Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.354959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65f95549b8-wtpxl"] Jan 30 14:07:14 crc kubenswrapper[4793]: W0130 14:07:14.367003 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed57c3d_4992_4cfa_8655_1587b5897df6.slice/crio-9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28 WatchSource:0}: Error finding container 9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28: Status 404 returned error can't find the container with id 9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28 Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.370304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d689db86f-zslsz"] Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.995384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d689db86f-zslsz" event={"ID":"0ed57c3d-4992-4cfa-8655-1587b5897df6","Type":"ContainerStarted","Data":"9da9be62ee33e3e755638eacd900313f352b976429d68344c5beb0852d0ecc28"} Jan 30 14:07:14 crc kubenswrapper[4793]: I0130 14:07:14.996714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65f95549b8-wtpxl" event={"ID":"57bfc822-1d30-49bc-a077-686b68e9c1e6","Type":"ContainerStarted","Data":"8241d78b09c1b96bd4873ccfc461532494b47b93d9baadfb67b18d99c4c94300"} Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.105078 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:16 crc kubenswrapper[4793]: E0130 14:07:16.105845 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.105862 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.106134 4793 
memory_manager.go:354] "RemoveStaleState removing state" podUID="126207f4-9b13-4892-aa15-0616a488af8c" containerName="barbican-db-sync" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.107275 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.112722 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.207805 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6dd7f7f8-htnvl"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.209609 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.212875 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.213146 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.213279 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2b9wh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.227097 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d78d76787-7f5jh"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.228380 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.235974 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.236925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.236959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.236988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.237009 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc 
kubenswrapper[4793]: I0130 14:07:16.237084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.237108 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.242245 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dd7f7f8-htnvl"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.258486 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d78d76787-7f5jh"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.276995 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.297738 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.300461 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.338976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af929740-592b-4d7f-9c99-061df6882206-logs\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339318 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339428 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653cedf2-2880-49ff-b177-8974b9f0ecdf-logs\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: 
\"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339833 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.339919 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data-custom\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340000 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs6g\" (UniqueName: \"kubernetes.io/projected/653cedf2-2880-49ff-b177-8974b9f0ecdf-kube-api-access-mbs6g\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.340491 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.341006 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342394 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342432 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rv7\" (UniqueName: \"kubernetes.io/projected/af929740-592b-4d7f-9c99-061df6882206-kube-api-access-f8rv7\") pod 
\"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342506 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-combined-ca-bundle\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data-custom\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.342753 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.343464 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.343979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.365175 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"] Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.375120 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"dnsmasq-dns-586bdc5f9-vxdfs\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc 
kubenswrapper[4793]: I0130 14:07:16.447513 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653cedf2-2880-49ff-b177-8974b9f0ecdf-logs\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.450290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653cedf2-2880-49ff-b177-8974b9f0ecdf-logs\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.454550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.458918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data-custom\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459265 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbs6g\" (UniqueName: \"kubernetes.io/projected/653cedf2-2880-49ff-b177-8974b9f0ecdf-kube-api-access-mbs6g\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459570 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rv7\" (UniqueName: \"kubernetes.io/projected/af929740-592b-4d7f-9c99-061df6882206-kube-api-access-f8rv7\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " 
pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459696 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-combined-ca-bundle\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.459887 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data-custom\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460066 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460186 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af929740-592b-4d7f-9c99-061df6882206-logs\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460292 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.460547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.472959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af929740-592b-4d7f-9c99-061df6882206-logs\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.473676 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.474173 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data-custom\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.483517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-config-data\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.488008 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-combined-ca-bundle\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.488455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data-custom\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.489496 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653cedf2-2880-49ff-b177-8974b9f0ecdf-config-data\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.492300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af929740-592b-4d7f-9c99-061df6882206-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.501556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rv7\" (UniqueName: \"kubernetes.io/projected/af929740-592b-4d7f-9c99-061df6882206-kube-api-access-f8rv7\") pod \"barbican-keystone-listener-6dd7f7f8-htnvl\" (UID: \"af929740-592b-4d7f-9c99-061df6882206\") " pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.502352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbs6g\" (UniqueName: \"kubernetes.io/projected/653cedf2-2880-49ff-b177-8974b9f0ecdf-kube-api-access-mbs6g\") pod \"barbican-worker-d78d76787-7f5jh\" (UID: \"653cedf2-2880-49ff-b177-8974b9f0ecdf\") " pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.531258 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.556627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d78d76787-7f5jh" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562336 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562444 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562555 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.562653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.566416 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.573340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.573825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.588765 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.598729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"barbican-api-56c564fddb-9cbqg\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") " pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:16 crc kubenswrapper[4793]: E0130 14:07:16.829147 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" Jan 30 14:07:16 crc kubenswrapper[4793]: I0130 14:07:16.887797 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039317 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerStarted","Data":"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039459 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="ceilometer-notification-agent" containerID="cri-o://b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" gracePeriod=30 Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039673 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039760 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" containerID="cri-o://923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" gracePeriod=30 Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.039856 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" containerID="cri-o://1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" gracePeriod=30 Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.052287 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8" exitCode=1 Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.052340 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.052937 4793 scope.go:117] "RemoveContainer" containerID="dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.069378 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d689db86f-zslsz" 
event={"ID":"0ed57c3d-4992-4cfa-8655-1587b5897df6","Type":"ContainerStarted","Data":"3f287ac88c96afaae65d350043cfce7455dba0ab3f6639d47bd36b0be7a83d97"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.070239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65f95549b8-wtpxl" event={"ID":"57bfc822-1d30-49bc-a077-686b68e9c1e6","Type":"ContainerStarted","Data":"3c4b90e584e671fccfcf606db61676f035f1df60975654e0b13044dc92b71347"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073223 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65f95549b8-wtpxl" event={"ID":"57bfc822-1d30-49bc-a077-686b68e9c1e6","Type":"ContainerStarted","Data":"a86058b646d896fef02aab189293f46ef58626db8f49b0a096ba1a82b0a7e285"} Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.073475 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.114135 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65f95549b8-wtpxl" podStartSLOduration=6.114112861 podStartE2EDuration="6.114112861s" podCreationTimestamp="2026-01-30 14:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:17.108499285 +0000 UTC m=+1447.809847766" watchObservedRunningTime="2026-01-30 14:07:17.114112861 +0000 UTC m=+1447.815461352" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.155741 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.158777 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d689db86f-zslsz" podStartSLOduration=10.155107844 podStartE2EDuration="10.155107844s" podCreationTimestamp="2026-01-30 14:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:17.130345004 +0000 UTC m=+1447.831693495" watchObservedRunningTime="2026-01-30 14:07:17.155107844 +0000 UTC m=+1447.856456335" Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.191166 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dd7f7f8-htnvl"] Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.348646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d78d76787-7f5jh"] Jan 30 14:07:17 crc kubenswrapper[4793]: I0130 14:07:17.440636 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"] Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.083941 4793 generic.go:334] "Generic (PLEG): container finished" podID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerID="86521a408e3d25c11a7337fcc940bc0bc142bbff9725007bee5f593d4d4fea8f" exitCode=0 Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.084497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" 
event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerDied","Data":"86521a408e3d25c11a7337fcc940bc0bc142bbff9725007bee5f593d4d4fea8f"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.084548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerStarted","Data":"30fb4318627919dfef7bd7d37dac82088ae21ede274e001c1e66cb82e9d4e95c"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.086162 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d78d76787-7f5jh" event={"ID":"653cedf2-2880-49ff-b177-8974b9f0ecdf","Type":"ContainerStarted","Data":"155e6aa0821f872713dde4309217a3f9f45836ee063b8a383db90e4c1b729351"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.094876 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" exitCode=0 Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.095289 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" exitCode=2 Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.094936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.095405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.110940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.123101 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" event={"ID":"af929740-592b-4d7f-9c99-061df6882206","Type":"ContainerStarted","Data":"ce9a2834d75e989b4996cc6e5a702194d98c3aaa7e98470bbd0b9d77db207c67"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.127593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerStarted","Data":"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.127954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerStarted","Data":"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.128107 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.129215 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" 
event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerStarted","Data":"f97b2202fc16d2a3c18bd1abd87cac5c90aa96890b8132c11e4c4e9fbac70a09"} Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.129370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:18 crc kubenswrapper[4793]: I0130 14:07:18.183182 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56c564fddb-9cbqg" podStartSLOduration=2.183158873 podStartE2EDuration="2.183158873s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:18.16155732 +0000 UTC m=+1448.862905811" watchObservedRunningTime="2026-01-30 14:07:18.183158873 +0000 UTC m=+1448.884507364" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.138625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerStarted","Data":"bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a"} Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.139196 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.164878 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" podStartSLOduration=3.164859349 podStartE2EDuration="3.164859349s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:19.161294174 +0000 UTC m=+1449.862642685" watchObservedRunningTime="2026-01-30 14:07:19.164859349 +0000 UTC m=+1449.866207840" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.165401 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-577797dd7d-dhrt2"] Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.169622 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.174892 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.175124 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.204743 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577797dd7d-dhrt2"] Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.322966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-public-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data-custom\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323081 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a389d76c-e0de-4b8d-84b2-82aedd050f7f-logs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ngb\" (UniqueName: \"kubernetes.io/projected/a389d76c-e0de-4b8d-84b2-82aedd050f7f-kube-api-access-22ngb\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323223 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-internal-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-combined-ca-bundle\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.323595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425192 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-22ngb\" (UniqueName: \"kubernetes.io/projected/a389d76c-e0de-4b8d-84b2-82aedd050f7f-kube-api-access-22ngb\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-internal-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-combined-ca-bundle\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425336 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425399 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-public-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data-custom\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a389d76c-e0de-4b8d-84b2-82aedd050f7f-logs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.425926 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a389d76c-e0de-4b8d-84b2-82aedd050f7f-logs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.430684 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-combined-ca-bundle\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.433024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data-custom\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.435448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-config-data\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.444979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-internal-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.448616 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a389d76c-e0de-4b8d-84b2-82aedd050f7f-public-tls-certs\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.448952 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ngb\" (UniqueName: \"kubernetes.io/projected/a389d76c-e0de-4b8d-84b2-82aedd050f7f-kube-api-access-22ngb\") pod \"barbican-api-577797dd7d-dhrt2\" (UID: \"a389d76c-e0de-4b8d-84b2-82aedd050f7f\") " pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.489225 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.609160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:19 crc kubenswrapper[4793]: I0130 14:07:19.609467 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:20 crc kubenswrapper[4793]: I0130 14:07:20.189150 4793 generic.go:334] "Generic (PLEG): container finished" podID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerID="ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6" exitCode=0 Jan 30 14:07:20 crc kubenswrapper[4793]: I0130 14:07:20.189430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerDied","Data":"ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6"} Jan 30 14:07:20 crc kubenswrapper[4793]: I0130 14:07:20.542570 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577797dd7d-dhrt2"] Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.227580 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d78d76787-7f5jh" event={"ID":"653cedf2-2880-49ff-b177-8974b9f0ecdf","Type":"ContainerStarted","Data":"643273086e560dec2921a2eb77b5c8efe71ddf9a8874e5a6ad6314a55c5f83f0"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.227862 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d78d76787-7f5jh" event={"ID":"653cedf2-2880-49ff-b177-8974b9f0ecdf","Type":"ContainerStarted","Data":"af17714dc1df2fa0408cdff26094746855f718a72e8fe0e97b5bbadd0c07079f"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.260659 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d78d76787-7f5jh" podStartSLOduration=2.650590823 podStartE2EDuration="5.260640749s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="2026-01-30 14:07:17.359226217 +0000 UTC m=+1448.060574708" lastFinishedPulling="2026-01-30 14:07:19.969276143 +0000 UTC m=+1450.670624634" observedRunningTime="2026-01-30 14:07:21.252178874 +0000 UTC m=+1451.953527385" watchObservedRunningTime="2026-01-30 14:07:21.260640749 +0000 UTC m=+1451.961989240" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.268075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" event={"ID":"af929740-592b-4d7f-9c99-061df6882206","Type":"ContainerStarted","Data":"276f2bcfcdbb4034f2621c20b42b288cddfcf0dd4a8ef08b418899b719afa302"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.268130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" event={"ID":"af929740-592b-4d7f-9c99-061df6882206","Type":"ContainerStarted","Data":"45f7aaca0a0ff8cfe6b883f5492be3d588aeee2190f8dec902ac7c3ad113e7ff"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.274092 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577797dd7d-dhrt2" event={"ID":"a389d76c-e0de-4b8d-84b2-82aedd050f7f","Type":"ContainerStarted","Data":"24f1ed1b5b88989a2fa39b7d9f9de2db99c0b16b303f2f6c39656e86d4d89733"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.274140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577797dd7d-dhrt2" 
event={"ID":"a389d76c-e0de-4b8d-84b2-82aedd050f7f","Type":"ContainerStarted","Data":"57b2c625731c3f35fca926d279e41c4247e77e8a5eddb40633ef7d98003c5cd1"} Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.310579 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6dd7f7f8-htnvl" podStartSLOduration=2.548281377 podStartE2EDuration="5.310552398s" podCreationTimestamp="2026-01-30 14:07:16 +0000 UTC" firstStartedPulling="2026-01-30 14:07:17.21358464 +0000 UTC m=+1447.914933131" lastFinishedPulling="2026-01-30 14:07:19.975855661 +0000 UTC m=+1450.677204152" observedRunningTime="2026-01-30 14:07:21.289865197 +0000 UTC m=+1451.991213698" watchObservedRunningTime="2026-01-30 14:07:21.310552398 +0000 UTC m=+1452.011900889" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.834620 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981754 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981828 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981910 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.981991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.982078 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.982116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") pod \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\" (UID: \"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd\") " Jan 30 14:07:21 crc kubenswrapper[4793]: I0130 14:07:21.983063 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.002085 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g" (OuterVolumeSpecName: "kube-api-access-gkv5g") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "kube-api-access-gkv5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.012231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts" (OuterVolumeSpecName: "scripts") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.013199 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.034960 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.085615 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.085847 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.085932 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkv5g\" (UniqueName: \"kubernetes.io/projected/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-kube-api-access-gkv5g\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.086026 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.133198 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data" (OuterVolumeSpecName: "config-data") pod "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" (UID: "f55384b1-b1fd-43eb-8c8d-73398a8f2ecd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.188245 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.282243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4rknj" event={"ID":"f55384b1-b1fd-43eb-8c8d-73398a8f2ecd","Type":"ContainerDied","Data":"6d4763986d1b4a11b99da97ae431575d2b3082d3a2bdcdbedb9c248948af623d"} Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.282279 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d4763986d1b4a11b99da97ae431575d2b3082d3a2bdcdbedb9c248948af623d" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.282332 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4rknj" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.292835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577797dd7d-dhrt2" event={"ID":"a389d76c-e0de-4b8d-84b2-82aedd050f7f","Type":"ContainerStarted","Data":"cb375cd077935993ece603f76e3e2a78c761c0d3002d3112c9452fbd5054cbcd"} Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.320156 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-577797dd7d-dhrt2" podStartSLOduration=3.32013359 podStartE2EDuration="3.32013359s" podCreationTimestamp="2026-01-30 14:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:22.31564884 +0000 UTC m=+1453.016997331" watchObservedRunningTime="2026-01-30 14:07:22.32013359 +0000 UTC m=+1453.021482081" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.491178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: E0130 14:07:22.491521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerName="cinder-db-sync" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.491534 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerName="cinder-db-sync" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.491742 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" containerName="cinder-db-sync" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.492627 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.510852 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.511144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5kb4p" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.511372 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.512216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.520650 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611351 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.611581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.612836 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.613090 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" 
podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" containerID="cri-o://bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a" gracePeriod=10 Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.644838 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.646440 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.693129 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715672 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.715753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.718772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.727953 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.728298 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.730796 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.738450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.758527 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"cinder-scheduler-0\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") " pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.791780 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.793754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.803760 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.816789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.816836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.816868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.820040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod 
\"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.820197 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.820235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.824633 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.849232 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925101 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925157 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925192 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: 
I0130 14:07:22.925365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925393 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925464 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925568 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.925663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.926134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.926658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.926852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.927766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.929885 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:22 crc kubenswrapper[4793]: I0130 14:07:22.945889 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod \"dnsmasq-dns-795f4db4bc-jsbkl\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.028682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.029321 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.037359 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.037425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.038210 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.039634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.044229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.046009 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.058459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"cinder-api-0\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.127991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.177467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.338729 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.340372 4793 generic.go:334] "Generic (PLEG): container finished" podID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerID="bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a" exitCode=0 Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.340449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerDied","Data":"bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a"} Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.402678 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" exitCode=0 Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.402970 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433"} Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f85d7b0d-5452-4175-842b-7d1505eb82e0","Type":"ContainerDied","Data":"50cb694f90f1d6a53f515af750afb638a61a81c6b156cbc3d6081c5686d9e08c"} Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403096 4793 scope.go:117] "RemoveContainer" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403672 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.403837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.475433 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.484089 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.485719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.487304 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: 
\"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.488105 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.485573 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.488620 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.490217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.494689 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") pod \"f85d7b0d-5452-4175-842b-7d1505eb82e0\" (UID: \"f85d7b0d-5452-4175-842b-7d1505eb82e0\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.506075 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.506310 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85d7b0d-5452-4175-842b-7d1505eb82e0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.521437 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts" (OuterVolumeSpecName: "scripts") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.559987 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q" (OuterVolumeSpecName: "kube-api-access-sld6q") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "kube-api-access-sld6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.577937 4793 scope.go:117] "RemoveContainer" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.608071 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.608104 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sld6q\" (UniqueName: \"kubernetes.io/projected/f85d7b0d-5452-4175-842b-7d1505eb82e0-kube-api-access-sld6q\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.713790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.722593 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.737386 4793 scope.go:117] "RemoveContainer" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.739372 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.741304 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.750332 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.825136 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.829236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data" (OuterVolumeSpecName: "config-data") pod "f85d7b0d-5452-4175-842b-7d1505eb82e0" (UID: "f85d7b0d-5452-4175-842b-7d1505eb82e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.853981 4793 scope.go:117] "RemoveContainer" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.854752 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d7b0d-5452-4175-842b-7d1505eb82e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:23 crc kubenswrapper[4793]: E0130 14:07:23.855034 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576\": container with ID starting with 923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576 not found: ID does not exist" containerID="923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.855075 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576"} err="failed to get container status \"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576\": rpc error: code = NotFound desc = could not find container \"923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576\": container with ID starting with 923bfcaa8b201d83c945c539eb3e40c4b867d49112f8e3980340450f20e94576 not found: ID does not exist" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.855095 4793 scope.go:117] "RemoveContainer" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" Jan 30 14:07:23 crc kubenswrapper[4793]: E0130 14:07:23.856460 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b\": container with ID starting with 1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b not found: ID does not exist" containerID="1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.856483 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b"} err="failed to get container status \"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b\": rpc error: code = NotFound desc = could not find container \"1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b\": container with ID starting with 1b1ea642d188771809a5b9b3e5272bd6c2f672734343d91a74e11b496f7e901b not found: ID does not exist" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.856499 4793 scope.go:117] "RemoveContainer" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" Jan 30 14:07:23 crc kubenswrapper[4793]: E0130 14:07:23.862923 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433\": container with ID starting with b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433 not found: ID does not exist" containerID="b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.862955 4793 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433"} err="failed to get container status \"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433\": rpc error: code = NotFound desc = could not find container \"b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433\": container with ID starting with b26ced91dadb4af6152807823abd299791ba6584a0d9e60752d98a108355f433 not found: ID does not exist" Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955809 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955870 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955900 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.955974 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") pod \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\" (UID: \"3ed51218-5677-4c7a-aeb6-1ec6c215178a\") " Jan 30 14:07:23 crc kubenswrapper[4793]: I0130 14:07:23.984033 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx" (OuterVolumeSpecName: "kube-api-access-745tx") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "kube-api-access-745tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.079684 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745tx\" (UniqueName: \"kubernetes.io/projected/3ed51218-5677-4c7a-aeb6-1ec6c215178a-kube-api-access-745tx\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.089498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.142855 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.147499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.153215 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.174140 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.184977 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.185014 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.188706 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189137 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189155 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189165 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189171 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189186 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="ceilometer-notification-agent" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189193 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" 
containerName="ceilometer-notification-agent" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189204 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189209 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" Jan 30 14:07:24 crc kubenswrapper[4793]: E0130 14:07:24.189219 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="init" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189224 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="init" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189391 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" containerName="dnsmasq-dns" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189406 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="ceilometer-notification-agent" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189414 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="sg-core" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.189426 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" containerName="proxy-httpd" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.191143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.192023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.194274 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.194350 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.203131 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.228166 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.232503 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config" (OuterVolumeSpecName: "config") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286311 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286378 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286407 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.286438 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289311 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ed51218-5677-4c7a-aeb6-1ec6c215178a" (UID: "3ed51218-5677-4c7a-aeb6-1ec6c215178a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289464 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.289983 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.290203 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.290374 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed51218-5677-4c7a-aeb6-1ec6c215178a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.394665 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.394720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395357 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395732 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hlzq\" (UniqueName: 
\"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395778 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.395977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.398894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.402732 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.406844 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.407061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.408007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.418861 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"ceilometer-0\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.424826 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85d7b0d-5452-4175-842b-7d1505eb82e0" 
path="/var/lib/kubelet/pods/f85d7b0d-5452-4175-842b-7d1505eb82e0/volumes" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.454468 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerStarted","Data":"75a99447618824a28826d92bf0cd6be6c9e8089ca3fa2987920905ca99000ff1"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.455785 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerStarted","Data":"dea9c67f4ab17b561d40848ccf607759778f130142a4dfee52cb6203cfd164a1"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.458190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerStarted","Data":"159c1470b0ba252efe02d67b50c8e7273c57baeaea595257f321b0b7be1d2fd8"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.462273 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.464111 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vxdfs" event={"ID":"3ed51218-5677-4c7a-aeb6-1ec6c215178a","Type":"ContainerDied","Data":"30fb4318627919dfef7bd7d37dac82088ae21ede274e001c1e66cb82e9d4e95c"} Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.464165 4793 scope.go:117] "RemoveContainer" containerID="bb31cb678cc7c2b077ba027ae624b678852c055b20b84f1ef0bb6524f80ba78a" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.531644 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.596333 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.614545 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vxdfs"] Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.835208 4793 scope.go:117] "RemoveContainer" containerID="86521a408e3d25c11a7337fcc940bc0bc142bbff9725007bee5f593d4d4fea8f" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.840300 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.840371 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.841110 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8"} pod="openstack/horizon-5b9fc5f8f6-nj7xv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 14:07:24 crc kubenswrapper[4793]: I0130 14:07:24.841141 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" containerID="cri-o://f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8" gracePeriod=30 Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.500217 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerID="a550c028a717096d5e1912e30909f7370216f5f1ecf7d5091df70cd1de2ebf87" exitCode=0 Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.500718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerDied","Data":"a550c028a717096d5e1912e30909f7370216f5f1ecf7d5091df70cd1de2ebf87"} Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.666522 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:25 crc kubenswrapper[4793]: I0130 14:07:25.957378 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.411822 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed51218-5677-4c7a-aeb6-1ec6c215178a" path="/var/lib/kubelet/pods/3ed51218-5677-4c7a-aeb6-1ec6c215178a/volumes" Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.536488 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerStarted","Data":"cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe"} Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.538615 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" 
event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerStarted","Data":"4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64"} Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.539707 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.541506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"d21421b35db87347d4a7181c28d855890a9a721d97cf5be20f5f36330a91c466"} Jan 30 14:07:26 crc kubenswrapper[4793]: I0130 14:07:26.574365 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" podStartSLOduration=4.574342875 podStartE2EDuration="4.574342875s" podCreationTimestamp="2026-01-30 14:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:26.562280933 +0000 UTC m=+1457.263629434" watchObservedRunningTime="2026-01-30 14:07:26.574342875 +0000 UTC m=+1457.275691366" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.527911 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.578401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerStarted","Data":"bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5"} Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.578561 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" containerID="cri-o://cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe" gracePeriod=30 Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.578784 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.579024 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" containerID="cri-o://bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5" gracePeriod=30 Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.583394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerStarted","Data":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"} Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.583430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerStarted","Data":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"} Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.615949 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.615929452 podStartE2EDuration="5.615929452s" podCreationTimestamp="2026-01-30 14:07:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:27.603220695 +0000 UTC m=+1458.304569196" watchObservedRunningTime="2026-01-30 14:07:27.615929452 +0000 UTC m=+1458.317277943" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.634382 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.263232931 podStartE2EDuration="5.634361479s" podCreationTimestamp="2026-01-30 14:07:22 +0000 UTC" firstStartedPulling="2026-01-30 14:07:23.888487225 +0000 UTC m=+1454.589835716" lastFinishedPulling="2026-01-30 14:07:25.259615773 +0000 UTC m=+1455.960964264" observedRunningTime="2026-01-30 14:07:27.624388167 +0000 UTC m=+1458.325736658" watchObservedRunningTime="2026-01-30 14:07:27.634361479 +0000 UTC m=+1458.335709970" Jan 30 14:07:27 crc kubenswrapper[4793]: I0130 14:07:27.842575 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.624701 4793 generic.go:334] "Generic (PLEG): container finished" podID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerID="bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5" exitCode=0 Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.627350 4793 generic.go:334] "Generic (PLEG): container finished" podID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerID="cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe" exitCode=143 Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.624954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerDied","Data":"bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5"} Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.627645 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerDied","Data":"cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe"} Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.630555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672"} Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.767546 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.845725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.845804 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.845928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846017 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846172 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs" (OuterVolumeSpecName: "logs") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.846849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") pod \"97106034-e262-47a4-ae89-2bf1e9aa354f\" (UID: \"97106034-e262-47a4-ae89-2bf1e9aa354f\") " Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.847655 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97106034-e262-47a4-ae89-2bf1e9aa354f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.847675 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97106034-e262-47a4-ae89-2bf1e9aa354f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.855703 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts" (OuterVolumeSpecName: "scripts") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.868211 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt" (OuterVolumeSpecName: "kube-api-access-nddvt") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "kube-api-access-nddvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.871962 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.899218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952277 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddvt\" (UniqueName: \"kubernetes.io/projected/97106034-e262-47a4-ae89-2bf1e9aa354f-kube-api-access-nddvt\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952316 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952333 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.952343 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:28 crc kubenswrapper[4793]: I0130 14:07:28.959530 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data" (OuterVolumeSpecName: "config-data") pod "97106034-e262-47a4-ae89-2bf1e9aa354f" (UID: "97106034-e262-47a4-ae89-2bf1e9aa354f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.053770 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97106034-e262-47a4-ae89-2bf1e9aa354f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.426767 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.618929 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.619309 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.620121 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"} pod="openstack/horizon-6b66cd9fcf-c94kp" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.620174 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" containerID="cri-o://1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266" gracePeriod=30 Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.720671 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"97106034-e262-47a4-ae89-2bf1e9aa354f","Type":"ContainerDied","Data":"75a99447618824a28826d92bf0cd6be6c9e8089ca3fa2987920905ca99000ff1"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.720740 4793 scope.go:117] "RemoveContainer" containerID="bf72d5828d72d09872e6bebaabe95465abe1d8ff3c5a7138290d16c256939ff5" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.720911 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.751256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.751301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.757162 4793 scope.go:117] "RemoveContainer" containerID="cbb9d373808ddc3a679132eab05b6ce25d5690657dca1f20d2fe727cd935b4fe" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.768262 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerID="f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8" exitCode=0 Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.769599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerDied","Data":"f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8"} Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.787109 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.800874 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817034 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: E0130 14:07:29.817419 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817435 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" Jan 30 14:07:29 crc kubenswrapper[4793]: E0130 14:07:29.817449 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817455 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817621 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api-log" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.817651 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" containerName="cinder-api" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.818533 4793 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.823322 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.823483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.823512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.837609 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.837659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.853101 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868278 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcn8k\" (UniqueName: \"kubernetes.io/projected/3105dc9e-c178-4799-a658-044d4d9b8312-kube-api-access-xcn8k\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3105dc9e-c178-4799-a658-044d4d9b8312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868395 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-scripts\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868472 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data-custom\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868542 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3105dc9e-c178-4799-a658-044d4d9b8312-logs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868563 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.868613 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.969961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-scripts\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data-custom\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3105dc9e-c178-4799-a658-044d4d9b8312-logs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970079 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970115 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcn8k\" (UniqueName: \"kubernetes.io/projected/3105dc9e-c178-4799-a658-044d4d9b8312-kube-api-access-xcn8k\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3105dc9e-c178-4799-a658-044d4d9b8312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.970281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.973851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3105dc9e-c178-4799-a658-044d4d9b8312-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.976868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3105dc9e-c178-4799-a658-044d4d9b8312-logs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.978992 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data-custom\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.979521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.982548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-scripts\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.984623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.986413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:29 crc kubenswrapper[4793]: I0130 14:07:29.987255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3105dc9e-c178-4799-a658-044d4d9b8312-config-data\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.003689 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcn8k\" (UniqueName: 
\"kubernetes.io/projected/3105dc9e-c178-4799-a658-044d4d9b8312-kube-api-access-xcn8k\") pod \"cinder-api-0\" (UID: \"3105dc9e-c178-4799-a658-044d4d9b8312\") " pod="openstack/cinder-api-0" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.150764 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.414389 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97106034-e262-47a4-ae89-2bf1e9aa354f" path="/var/lib/kubelet/pods/97106034-e262-47a4-ae89-2bf1e9aa354f/volumes" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.484275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c564fddb-9cbqg" Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.773892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 14:07:30 crc kubenswrapper[4793]: I0130 14:07:30.783497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32"} Jan 30 14:07:31 crc kubenswrapper[4793]: I0130 14:07:31.840386 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3105dc9e-c178-4799-a658-044d4d9b8312","Type":"ContainerStarted","Data":"cd40f95368411b7b7624f6cefa1037a51682f45dcdf5aa9cdc5fd4b2cbe3b9b8"} Jan 30 14:07:31 crc kubenswrapper[4793]: I0130 14:07:31.840704 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3105dc9e-c178-4799-a658-044d4d9b8312","Type":"ContainerStarted","Data":"0ac28b1a3e02c47c2f66643e29bbde6de1d8f2d98e53eee6f58248806331ad3b"} Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.852023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerStarted","Data":"6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4"} Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.853160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.854292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3105dc9e-c178-4799-a658-044d4d9b8312","Type":"ContainerStarted","Data":"145de7c0116031ea1a2a271f310eb429f2ca5d3d0cd2a37fed800d5cde00f3ce"} Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.854489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 14:07:32 crc kubenswrapper[4793]: I0130 14:07:32.899091 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.076168668 podStartE2EDuration="8.899074338s" podCreationTimestamp="2026-01-30 14:07:24 +0000 UTC" firstStartedPulling="2026-01-30 14:07:25.674222624 +0000 UTC m=+1456.375571125" lastFinishedPulling="2026-01-30 14:07:31.497128304 +0000 UTC m=+1462.198476795" observedRunningTime="2026-01-30 14:07:32.884295301 +0000 UTC m=+1463.585643802" watchObservedRunningTime="2026-01-30 14:07:32.899074338 +0000 UTC m=+1463.600422829" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.130142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.151832 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.15181619 podStartE2EDuration="4.15181619s" podCreationTimestamp="2026-01-30 14:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:32.940916622 +0000 UTC m=+1463.642265113" watchObservedRunningTime="2026-01-30 14:07:33.15181619 +0000 UTC m=+1463.853164681" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.190582 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.190808 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns" containerID="cri-o://43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" gracePeriod=10 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.404392 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.416986 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.617458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.668632 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.790821 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577797dd7d-dhrt2" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.791728 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.868511 4793 generic.go:334] "Generic (PLEG): container finished" podID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" exitCode=0 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.869701 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870133 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler" containerID="cri-o://7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870510 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe" containerID="cri-o://8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870524 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerDied","Data":"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630"} Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870669 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-zbt8c" event={"ID":"b318d131-c8b9-41a5-a500-f8a9405e0074","Type":"ContainerDied","Data":"de747f3964ebf14001721dc6443bbc5eded45594ed34eae45ced08a6517ebd85"} Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.870694 4793 scope.go:117] "RemoveContainer" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.878523 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"] Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.878740 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log" containerID="cri-o://f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.878818 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api" containerID="cri-o://782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6" gracePeriod=30 Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915094 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915179 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc 
kubenswrapper[4793]: I0130 14:07:33.915494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915525 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.915603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") pod \"b318d131-c8b9-41a5-a500-f8a9405e0074\" (UID: \"b318d131-c8b9-41a5-a500-f8a9405e0074\") " Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.925261 4793 scope.go:117] "RemoveContainer" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" Jan 30 14:07:33 crc kubenswrapper[4793]: I0130 14:07:33.960787 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm" (OuterVolumeSpecName: "kube-api-access-6ptwm") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "kube-api-access-6ptwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.024851 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ptwm\" (UniqueName: \"kubernetes.io/projected/b318d131-c8b9-41a5-a500-f8a9405e0074-kube-api-access-6ptwm\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.074073 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.083295 4793 scope.go:117] "RemoveContainer" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" Jan 30 14:07:34 crc kubenswrapper[4793]: E0130 14:07:34.085562 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630\": container with ID starting with 43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630 not found: ID does not exist" containerID="43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.085599 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630"} err="failed to get container status \"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630\": rpc error: code = NotFound desc = could not find container \"43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630\": container with ID starting with 43395febb995dc111438464db00b3b9b05181d0334af6dba31c1c9291e5ad630 not found: ID does not exist" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.085619 4793 scope.go:117] "RemoveContainer" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" Jan 30 14:07:34 crc kubenswrapper[4793]: E0130 14:07:34.085971 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d\": container with ID starting with 8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d not found: ID does not exist" containerID="8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.086023 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d"} err="failed to get container status \"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d\": rpc error: code = NotFound desc = could not find container \"8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d\": container with ID starting with 8a0dbacd16af734d4df166913fa44e633a22b0a758aa38edcc7a529d440a076d not found: ID does not exist" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.105149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.118721 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.126335 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.126368 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.126378 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.129187 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.134548 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config" (OuterVolumeSpecName: "config") pod "b318d131-c8b9-41a5-a500-f8a9405e0074" (UID: "b318d131-c8b9-41a5-a500-f8a9405e0074"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.242905 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.242978 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b318d131-c8b9-41a5-a500-f8a9405e0074-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.272303 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.318504 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-zbt8c"] Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.408292 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" path="/var/lib/kubelet/pods/b318d131-c8b9-41a5-a500-f8a9405e0074/volumes" Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.890458 4793 generic.go:334] "Generic (PLEG): container finished" podID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc" exitCode=143 Jan 30 14:07:34 crc kubenswrapper[4793]: I0130 14:07:34.890505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerDied","Data":"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"} Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.305016 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.363210 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.363403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.363998 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364035 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364118 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") pod \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\" (UID: \"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8\") "
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.364634 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.366201 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.372430 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52" (OuterVolumeSpecName: "kube-api-access-bvr52") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "kube-api-access-bvr52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.373088 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts" (OuterVolumeSpecName: "scripts") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.385366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.458790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469545 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469600 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469613 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.469625 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvr52\" (UniqueName: \"kubernetes.io/projected/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-kube-api-access-bvr52\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.574184 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data" (OuterVolumeSpecName: "config-data") pod "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" (UID: "7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.674558 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912395 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4" exitCode=0
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912441 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156" exitCode=0
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912451 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912465 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerDied","Data":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"}
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerDied","Data":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"}
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8","Type":"ContainerDied","Data":"159c1470b0ba252efe02d67b50c8e7273c57baeaea595257f321b0b7be1d2fd8"}
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.912528 4793 scope.go:117] "RemoveContainer" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.974809 4793 scope.go:117] "RemoveContainer" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:35 crc kubenswrapper[4793]: I0130 14:07:35.978838 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.001103 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.002730 4793 scope.go:117] "RemoveContainer" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.003131 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": container with ID starting with 8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4 not found: ID does not exist" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003176 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"} err="failed to get container status \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": rpc error: code = NotFound desc = could not find container \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": container with ID starting with 8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003218 4793 scope.go:117] "RemoveContainer" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.003538 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": container with ID starting with 7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156 not found: ID does not exist" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003570 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"} err="failed to get container status \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": rpc error: code = NotFound desc = could not find container \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": container with ID starting with 7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.003591 4793 scope.go:117] "RemoveContainer" containerID="8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.007142 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4"} err="failed to get container status \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": rpc error: code = NotFound desc = could not find container \"8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4\": container with ID starting with 8b22aba7c9d81a5adb4b976ca09883c49b8076399f0289ebb36c6aebdd7094a4 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.007174 4793 scope.go:117] "RemoveContainer" containerID="7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.007470 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156"} err="failed to get container status \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": rpc error: code = NotFound desc = could not find container \"7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156\": container with ID starting with 7a5107a899af7a34e11127189b6298e87c8f1edab3f0e5f226f21de117501156 not found: ID does not exist"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025120 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025513 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025530 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025546 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="init"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025553 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="init"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025567 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025574 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns"
Jan 30 14:07:36 crc kubenswrapper[4793]: E0130 14:07:36.025593 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025598 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025766 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="probe"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025777 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b318d131-c8b9-41a5-a500-f8a9405e0074" containerName="dnsmasq-dns"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.025801 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" containerName="cinder-scheduler"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.026714 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.032282 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.033153 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084640 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084730 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6dm\" (UniqueName: \"kubernetes.io/projected/83e26b73-5483-4b6c-88cd-5d794f14ef5a-kube-api-access-6p6dm\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084785 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83e26b73-5483-4b6c-88cd-5d794f14ef5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084842 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.084868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186095 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6dm\" (UniqueName: \"kubernetes.io/projected/83e26b73-5483-4b6c-88cd-5d794f14ef5a-kube-api-access-6p6dm\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186165 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186190 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83e26b73-5483-4b6c-88cd-5d794f14ef5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.186378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.187135 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83e26b73-5483-4b6c-88cd-5d794f14ef5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.191013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.191645 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.192365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.193749 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e26b73-5483-4b6c-88cd-5d794f14ef5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.211749 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6dm\" (UniqueName: \"kubernetes.io/projected/83e26b73-5483-4b6c-88cd-5d794f14ef5a-kube-api-access-6p6dm\") pod \"cinder-scheduler-0\" (UID: \"83e26b73-5483-4b6c-88cd-5d794f14ef5a\") " pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.363234 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.408110 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8" path="/var/lib/kubelet/pods/7a2766a0-68a6-4e1c-82ea-94ecfcae2ec8/volumes"
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.840380 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 14:07:36 crc kubenswrapper[4793]: W0130 14:07:36.852447 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e26b73_5483_4b6c_88cd_5d794f14ef5a.slice/crio-4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008 WatchSource:0}: Error finding container 4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008: Status 404 returned error can't find the container with id 4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.930175 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"83e26b73-5483-4b6c-88cd-5d794f14ef5a","Type":"ContainerStarted","Data":"4b17f6f61088e29fa61e37e7348dfb7c1a407afd8d8c7ca3fb800507639af008"}
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.941331 4793 generic.go:334] "Generic (PLEG): container finished" podID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerID="3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba" exitCode=0
Jan 30 14:07:36 crc kubenswrapper[4793]: I0130 14:07:36.941600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerDied","Data":"3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba"}
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.357000 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:60306->10.217.0.158:9311: read: connection reset by peer"
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.357574 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c564fddb-9cbqg" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:60320->10.217.0.158:9311: read: connection reset by peer"
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.779949 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835530 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835651 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835728 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.835843 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") pod \"a2288b37-d331-4c7e-b95d-13bb4987eb75\" (UID: \"a2288b37-d331-4c7e-b95d-13bb4987eb75\") "
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.836359 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs" (OuterVolumeSpecName: "logs") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.843836 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.846263 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94" (OuterVolumeSpecName: "kube-api-access-8zv94") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "kube-api-access-8zv94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.867271 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.890442 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data" (OuterVolumeSpecName: "config-data") pod "a2288b37-d331-4c7e-b95d-13bb4987eb75" (UID: "a2288b37-d331-4c7e-b95d-13bb4987eb75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941731 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2288b37-d331-4c7e-b95d-13bb4987eb75-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941767 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941779 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zv94\" (UniqueName: \"kubernetes.io/projected/a2288b37-d331-4c7e-b95d-13bb4987eb75-kube-api-access-8zv94\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941795 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:37 crc kubenswrapper[4793]: I0130 14:07:37.941810 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2288b37-d331-4c7e-b95d-13bb4987eb75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011474 4793 generic.go:334] "Generic (PLEG): container finished" podID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6" exitCode=0
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerDied","Data":"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"}
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c564fddb-9cbqg" event={"ID":"a2288b37-d331-4c7e-b95d-13bb4987eb75","Type":"ContainerDied","Data":"f97b2202fc16d2a3c18bd1abd87cac5c90aa96890b8132c11e4c4e9fbac70a09"}
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011614 4793 scope.go:117] "RemoveContainer" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.011755 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c564fddb-9cbqg"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.026117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"83e26b73-5483-4b6c-88cd-5d794f14ef5a","Type":"ContainerStarted","Data":"9f6bf51b0d3ae3ad5c4b17a445b1872a23a3e99c9b18205de5d2846bc10811e6"}
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.061499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"]
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.067575 4793 scope.go:117] "RemoveContainer" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.068907 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56c564fddb-9cbqg"]
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132140 4793 scope.go:117] "RemoveContainer" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"
Jan 30 14:07:38 crc kubenswrapper[4793]: E0130 14:07:38.132536 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6\": container with ID starting with 782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6 not found: ID does not exist" containerID="782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132567 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6"} err="failed to get container status \"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6\": rpc error: code = NotFound desc = could not find container \"782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6\": container with ID starting with 782631c644ee1b9a4f6e696f72f44200639bcdb901d4164311aa7466071988b6 not found: ID does not exist"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132587 4793 scope.go:117] "RemoveContainer" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"
Jan 30 14:07:38 crc kubenswrapper[4793]: E0130 14:07:38.132892 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc\": container with ID starting with f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc not found: ID does not exist" containerID="f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.132917 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc"} err="failed to get container status \"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc\": rpc error: code = NotFound desc = could not find container \"f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc\": container with ID starting with f628e6f62b1db74106d61bb98b04bcfe6ac2c982a9d425388dc01c50f8a7dadc not found: ID does not exist"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.412428 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" path="/var/lib/kubelet/pods/a2288b37-d331-4c7e-b95d-13bb4987eb75/volumes"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.564461 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9k2k7"
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.652318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") pod \"16a2a816-c28c-4d74-848a-2821a9d68d70\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") "
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.652405 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") pod \"16a2a816-c28c-4d74-848a-2821a9d68d70\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") "
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.652631 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") pod \"16a2a816-c28c-4d74-848a-2821a9d68d70\" (UID: \"16a2a816-c28c-4d74-848a-2821a9d68d70\") "
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.679702 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6" (OuterVolumeSpecName: "kube-api-access-mb7n6") pod "16a2a816-c28c-4d74-848a-2821a9d68d70" (UID: "16a2a816-c28c-4d74-848a-2821a9d68d70"). InnerVolumeSpecName "kube-api-access-mb7n6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.684193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16a2a816-c28c-4d74-848a-2821a9d68d70" (UID: "16a2a816-c28c-4d74-848a-2821a9d68d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.703173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config" (OuterVolumeSpecName: "config") pod "16a2a816-c28c-4d74-848a-2821a9d68d70" (UID: "16a2a816-c28c-4d74-848a-2821a9d68d70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.754521 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.754553 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb7n6\" (UniqueName: \"kubernetes.io/projected/16a2a816-c28c-4d74-848a-2821a9d68d70-kube-api-access-mb7n6\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:38 crc kubenswrapper[4793]: I0130 14:07:38.754566 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/16a2a816-c28c-4d74-848a-2821a9d68d70-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.035750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9k2k7" event={"ID":"16a2a816-c28c-4d74-848a-2821a9d68d70","Type":"ContainerDied","Data":"fc613fe2ad6c1be056bd77d206032a6320f75af4b1f9de343208058c0b3d8709"}
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.035794 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc613fe2ad6c1be056bd77d206032a6320f75af4b1f9de343208058c0b3d8709"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.035857 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9k2k7"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.045651 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"83e26b73-5483-4b6c-88cd-5d794f14ef5a","Type":"ContainerStarted","Data":"b933b510d8c79ac267ebb1c54b743d5617a150a4c0c6aa1255f3ea6f5c051ace"}
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.091233 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.091215941 podStartE2EDuration="4.091215941s" podCreationTimestamp="2026-01-30 14:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:39.090433101 +0000 UTC m=+1469.791781612" watchObservedRunningTime="2026-01-30 14:07:39.091215941 +0000 UTC m=+1469.792564432"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.146918 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"]
Jan 30 14:07:39 crc kubenswrapper[4793]: E0130 14:07:39.147370 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147388 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log"
Jan 30 14:07:39 crc kubenswrapper[4793]: E0130 14:07:39.147401 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147407 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api"
Jan 30 14:07:39 crc kubenswrapper[4793]: E0130 14:07:39.147436 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerName="neutron-db-sync"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147443 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerName="neutron-db-sync"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147615 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147635 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2288b37-d331-4c7e-b95d-13bb4987eb75" containerName="barbican-api-log"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.147653 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" containerName="neutron-db-sync"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.148596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.207138 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"]
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.251839 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.258133 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.274731 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.274989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.275184 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.275336 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-brjvn"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.285909 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286031 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286156 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.286189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.318126 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388848 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.388972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389082 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389107 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389221 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.389971 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390033 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.390589 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.412862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"dnsmasq-dns-5c9776ccc5-t5wk9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.483951 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491517 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491538 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.491598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.496918 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.507533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.507611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.510606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.513198 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"neutron-75bd8998b8-27gd6\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.581721 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:39 crc kubenswrapper[4793]: I0130 14:07:39.845178 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:07:40 crc kubenswrapper[4793]: I0130 14:07:40.088323 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"]
Jan 30 14:07:40 crc kubenswrapper[4793]: W0130 14:07:40.094838 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbe3cabf_7884_41df_adac_ad1bf7e76bf9.slice/crio-067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7 WatchSource:0}: Error finding container 067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7: Status 404 returned error can't find the container with id 067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7
Jan 30 14:07:40 crc kubenswrapper[4793]: I0130 14:07:40.330925 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.065303 4793 generic.go:334] "Generic (PLEG): container finished" podID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" exitCode=0
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.065398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerDied","Data":"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.065689 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerStarted","Data":"067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067669 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerStarted","Data":"aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerStarted","Data":"9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerStarted","Data":"0c2d21afdba7970d61ae9dcca3d44a8ee8d119daf524bd616f6bfe333ace90f3"}
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.067852 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.156831 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75bd8998b8-27gd6" podStartSLOduration=2.156815069 podStartE2EDuration="2.156815069s" podCreationTimestamp="2026-01-30 14:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:41.109872802 +0000 UTC m=+1471.811221323" watchObservedRunningTime="2026-01-30 14:07:41.156815069 +0000 UTC m=+1471.858163560"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.369470 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.653454 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-668ffd44cc-lhns4"]
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.659420 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.663492 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.663644 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.678740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-668ffd44cc-lhns4"]
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760397 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsns\" (UniqueName: \"kubernetes.io/projected/d9f34138-4dce-415b-ad20-cf0ba588f012-kube-api-access-cbsns\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760471 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-internal-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-ovndb-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760525 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-combined-ca-bundle\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-public-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-httpd-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.760640 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862074 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsns\" (UniqueName: \"kubernetes.io/projected/d9f34138-4dce-415b-ad20-cf0ba588f012-kube-api-access-cbsns\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862160 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-internal-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862181 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-ovndb-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-combined-ca-bundle\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-public-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862292 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-httpd-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.862315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.869705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-ovndb-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.869764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-combined-ca-bundle\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.869931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-internal-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.872817 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-httpd-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.881487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-config\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.894729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f34138-4dce-415b-ad20-cf0ba588f012-public-tls-certs\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:41 crc kubenswrapper[4793]: I0130 14:07:41.901809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsns\" (UniqueName: \"kubernetes.io/projected/d9f34138-4dce-415b-ad20-cf0ba588f012-kube-api-access-cbsns\") pod \"neutron-668ffd44cc-lhns4\" (UID: \"d9f34138-4dce-415b-ad20-cf0ba588f012\") " pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.019561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-668ffd44cc-lhns4"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.082967 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerStarted","Data":"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9"}
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.083616 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.110005 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" podStartSLOduration=3.109983014 podStartE2EDuration="3.109983014s" podCreationTimestamp="2026-01-30 14:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:42.107359461 +0000 UTC m=+1472.808707952" watchObservedRunningTime="2026-01-30 14:07:42.109983014 +0000 UTC m=+1472.811331505"
Jan 30 14:07:42 crc kubenswrapper[4793]: I0130 14:07:42.679884 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-668ffd44cc-lhns4"]
Jan 30 14:07:43 crc kubenswrapper[4793]: I0130 14:07:43.118872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-668ffd44cc-lhns4" event={"ID":"d9f34138-4dce-415b-ad20-cf0ba588f012","Type":"ContainerStarted","Data":"def0e09d8215d1128f3b8d9e2dff0f499eba944c2fe283c8b19da86a92134de3"}
Jan 30 14:07:43 crc kubenswrapper[4793]: I0130 14:07:43.119599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-668ffd44cc-lhns4"
event={"ID":"d9f34138-4dce-415b-ad20-cf0ba588f012","Type":"ContainerStarted","Data":"b806303ea738519210a64a9d9989bb78f1b45eb8b172fb4de474e0bcd077ca0e"} Jan 30 14:07:43 crc kubenswrapper[4793]: I0130 14:07:43.675412 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d689db86f-zslsz" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.121430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-668ffd44cc-lhns4" event={"ID":"d9f34138-4dce-415b-ad20-cf0ba588f012","Type":"ContainerStarted","Data":"f9985449191b4ffcd31221b22a2f985848c73964cf8516d53b7c455eec2eaab5"} Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.121793 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-668ffd44cc-lhns4" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.145669 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-668ffd44cc-lhns4" podStartSLOduration=3.145650398 podStartE2EDuration="3.145650398s" podCreationTimestamp="2026-01-30 14:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:07:44.14080234 +0000 UTC m=+1474.842150831" watchObservedRunningTime="2026-01-30 14:07:44.145650398 +0000 UTC m=+1474.846998889" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.163297 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="3105dc9e-c178-4799-a658-044d4d9b8312" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:07:44 crc kubenswrapper[4793]: I0130 14:07:44.377028 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.448672 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.450113 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.452724 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.453546 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.454158 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-68q9f" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.467003 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542592 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6hs\" (UniqueName: \"kubernetes.io/projected/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-kube-api-access-6f6hs\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542658 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.542691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644227 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.644436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6f6hs\" (UniqueName: \"kubernetes.io/projected/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-kube-api-access-6f6hs\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.645592 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.650197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.652398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.677666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f6hs\" (UniqueName: \"kubernetes.io/projected/dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7-kube-api-access-6f6hs\") pod \"openstackclient\" (UID: \"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7\") " pod="openstack/openstackclient" Jan 30 14:07:45 crc kubenswrapper[4793]: I0130 14:07:45.777628 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.675982 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.810011 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.820042 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65f95549b8-wtpxl" Jan 30 14:07:46 crc kubenswrapper[4793]: I0130 14:07:46.843762 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 14:07:47 crc kubenswrapper[4793]: I0130 14:07:47.151419 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7","Type":"ContainerStarted","Data":"c117e8966984d1742423ebc29fafde41dbe7cdc75011c22f88b7b683046118f8"} Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.169250 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="3105dc9e-c178-4799-a658-044d4d9b8312" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.486380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.538843 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.539088 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" containerID="cri-o://4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64" gracePeriod=10 Jan 30 14:07:49 crc kubenswrapper[4793]: I0130 14:07:49.838066 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.189801 4793 generic.go:334] "Generic (PLEG): container finished" podID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerID="4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64" exitCode=0 Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.189872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerDied","Data":"4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64"} Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.190146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" event={"ID":"2e12fa14-c592-4e14-8e7a-c02ee84cec72","Type":"ContainerDied","Data":"dea9c67f4ab17b561d40848ccf607759778f130142a4dfee52cb6203cfd164a1"} Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.190159 4793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="dea9c67f4ab17b561d40848ccf607759778f130142a4dfee52cb6203cfd164a1" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.236821 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346771 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346821 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346959 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.346989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.347039 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.347118 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") pod \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\" (UID: \"2e12fa14-c592-4e14-8e7a-c02ee84cec72\") " Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.375302 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b" (OuterVolumeSpecName: "kube-api-access-hrw8b") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "kube-api-access-hrw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.440239 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.474405 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.474701 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrw8b\" (UniqueName: \"kubernetes.io/projected/2e12fa14-c592-4e14-8e7a-c02ee84cec72-kube-api-access-hrw8b\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.513267 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.534600 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config" (OuterVolumeSpecName: "config") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.541996 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.566623 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e12fa14-c592-4e14-8e7a-c02ee84cec72" (UID: "2e12fa14-c592-4e14-8e7a-c02ee84cec72"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576189 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576227 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576238 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:50 crc kubenswrapper[4793]: I0130 14:07:50.576247 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e12fa14-c592-4e14-8e7a-c02ee84cec72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.207777 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266" exitCode=1 Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.208073 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-jsbkl" Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.209288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"} Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.209328 4793 scope.go:117] "RemoveContainer" containerID="dff5cd3a5cfaef3ae4c87e55c3563d4578820a2c23ec2494ebf248940d3816d8" Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.344680 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:51 crc kubenswrapper[4793]: I0130 14:07:51.351964 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-jsbkl"] Jan 30 14:07:52 crc kubenswrapper[4793]: I0130 14:07:52.217570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"} Jan 30 14:07:52 crc kubenswrapper[4793]: I0130 14:07:52.409508 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" path="/var/lib/kubelet/pods/2e12fa14-c592-4e14-8e7a-c02ee84cec72/volumes" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.234913 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7767cf976c-8m6hn"] Jan 30 14:07:54 crc kubenswrapper[4793]: E0130 14:07:54.239478 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="init" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.239496 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="init" Jan 30 14:07:54 crc kubenswrapper[4793]: E0130 14:07:54.239525 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.239531 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.239692 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e12fa14-c592-4e14-8e7a-c02ee84cec72" containerName="dnsmasq-dns" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.240753 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.244776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.244974 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.245125 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.256377 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7767cf976c-8m6hn"] Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351311 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwgt\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-kube-api-access-cbwgt\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351354 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-config-data\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-internal-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-combined-ca-bundle\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-public-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-run-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-log-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.351946 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-etc-swift\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.453819 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-config-data\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.453914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-internal-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-combined-ca-bundle\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-public-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-run-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-log-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-etc-swift\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.454156 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwgt\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-kube-api-access-cbwgt\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.455016 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-run-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.459160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de3851c3-345e-41a1-ad9e-ee3f4e357d85-log-httpd\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.462229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-etc-swift\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.473831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-config-data\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.474409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-internal-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.474913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-public-tls-certs\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.479581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3851c3-345e-41a1-ad9e-ee3f4e357d85-combined-ca-bundle\") pod \"swift-proxy-7767cf976c-8m6hn\" (UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.480729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwgt\" (UniqueName: \"kubernetes.io/projected/de3851c3-345e-41a1-ad9e-ee3f4e357d85-kube-api-access-cbwgt\") pod \"swift-proxy-7767cf976c-8m6hn\" 
(UID: \"de3851c3-345e-41a1-ad9e-ee3f4e357d85\") " pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.560468 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.562501 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 14:07:54 crc kubenswrapper[4793]: I0130 14:07:54.566033 4793 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3ed51218-5677-4c7a-aeb6-1ec6c215178a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod3ed51218-5677-4c7a-aeb6-1ec6c215178a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3ed51218_5677_4c7a_aeb6_1ec6c215178a.slice" Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.764253 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.765037 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" containerID="cri-o://4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8" gracePeriod=30 Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.765157 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" containerID="cri-o://6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4" gracePeriod=30 Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.765462 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" containerID="cri-o://1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e" gracePeriod=30 Jan 30 14:07:55 crc kubenswrapper[4793]: I0130 14:07:55.764730 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" containerID="cri-o://0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672" gracePeriod=30 Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268070 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4" exitCode=0 Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4"} Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268104 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8" exitCode=2 Jan 30 14:07:56 crc kubenswrapper[4793]: I0130 14:07:56.268127 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8"} Jan 30 14:07:57 crc 
kubenswrapper[4793]: I0130 14:07:57.280786 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672" exitCode=0 Jan 30 14:07:57 crc kubenswrapper[4793]: I0130 14:07:57.280956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672"} Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.300711 4793 generic.go:334] "Generic (PLEG): container finished" podID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerID="1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e" exitCode=0 Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.300790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e"} Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.608740 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:07:59 crc kubenswrapper[4793]: I0130 14:07:59.608802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.838280 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.838824 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.839666 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32"} pod="openstack/horizon-5b9fc5f8f6-nj7xv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 14:08:04 crc kubenswrapper[4793]: I0130 14:08:04.839709 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" containerID="cri-o://640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32" gracePeriod=30 Jan 30 14:08:05 crc kubenswrapper[4793]: I0130 14:08:05.690026 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:05 crc kubenswrapper[4793]: I0130 14:08:05.691019 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" containerID="cri-o://031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170" gracePeriod=30 Jan 30 14:08:05 crc kubenswrapper[4793]: I0130 14:08:05.691019 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" 
containerID="cri-o://dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8" gracePeriod=30 Jan 30 14:08:05 crc kubenswrapper[4793]: E0130 14:08:05.993413 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 30 14:08:05 crc kubenswrapper[4793]: E0130 14:08:05.993819 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cdh694h594hb8h5f7h79h544h6h5b9h64ch656h9ch55h58dh585h5dh565h75h5c6h65hc9hffh7h664h5c4h5bch678h95hb7hd6h5c6h75q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f6hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:08:05 crc kubenswrapper[4793]: E0130 14:08:05.995179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.386941 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45c782cb-cc45-4785-bdff-d6d9e30389e8","Type":"ContainerDied","Data":"d21421b35db87347d4a7181c28d855890a9a721d97cf5be20f5f36330a91c466"} Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.387327 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21421b35db87347d4a7181c28d855890a9a721d97cf5be20f5f36330a91c466" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.390746 4793 generic.go:334] "Generic (PLEG): container finished" podID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerID="dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8" exitCode=143 Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.392169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerDied","Data":"dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8"} Jan 30 14:08:06 crc kubenswrapper[4793]: E0130 14:08:06.395231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.454609 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.578886 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579159 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579824 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.579960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.580131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") pod 
\"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.580205 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") pod \"45c782cb-cc45-4785-bdff-d6d9e30389e8\" (UID: \"45c782cb-cc45-4785-bdff-d6d9e30389e8\") " Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.580803 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.581711 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.587867 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq" (OuterVolumeSpecName: "kube-api-access-5hlzq") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "kube-api-access-5hlzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.597354 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts" (OuterVolumeSpecName: "scripts") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.667713 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.681836 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682225 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682353 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45c782cb-cc45-4785-bdff-d6d9e30389e8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682416 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hlzq\" (UniqueName: \"kubernetes.io/projected/45c782cb-cc45-4785-bdff-d6d9e30389e8-kube-api-access-5hlzq\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.682478 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.688686 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7767cf976c-8m6hn"] Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.707246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.711256 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data" (OuterVolumeSpecName: "config-data") pod "45c782cb-cc45-4785-bdff-d6d9e30389e8" (UID: "45c782cb-cc45-4785-bdff-d6d9e30389e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.784328 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:06 crc kubenswrapper[4793]: I0130 14:08:06.784482 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45c782cb-cc45-4785-bdff-d6d9e30389e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.363935 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364371 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364390 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364408 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364417 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364440 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364447 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" Jan 30 14:08:07 crc kubenswrapper[4793]: E0130 14:08:07.364472 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364481 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364713 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-notification-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364729 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="sg-core" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364755 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="proxy-httpd" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.364768 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" containerName="ceilometer-central-agent" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.365483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.377230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.418496 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7767cf976c-8m6hn" event={"ID":"de3851c3-345e-41a1-ad9e-ee3f4e357d85","Type":"ContainerStarted","Data":"2530debb883c8718264ad859e9a7e4a811aa1f43db904ffcb018cbaf3181cc82"} Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425206 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7767cf976c-8m6hn" event={"ID":"de3851c3-345e-41a1-ad9e-ee3f4e357d85","Type":"ContainerStarted","Data":"d3cc4543b61e25259ad21b1238264a2493c067ecc414c9ee20e5a711e20fe3f4"} Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425223 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7767cf976c-8m6hn" event={"ID":"de3851c3-345e-41a1-ad9e-ee3f4e357d85","Type":"ContainerStarted","Data":"8a946a4833cfb767bcfbbb40705973681bed85995635fe64826cd54d06ee681d"} Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425244 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.425259 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.483195 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7767cf976c-8m6hn" podStartSLOduration=13.483176667 podStartE2EDuration="13.483176667s" podCreationTimestamp="2026-01-30 14:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:07.464283479 +0000 UTC m=+1498.165631970" watchObservedRunningTime="2026-01-30 14:08:07.483176667 +0000 UTC m=+1498.184525158" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.492265 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.495615 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.495969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.506184 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.523337 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc 
kubenswrapper[4793]: I0130 14:08:07.525504 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.528575 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.532946 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.533159 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.601766 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602790 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.602891 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603330 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod 
\"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.603412 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.604883 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.610178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.612299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.627354 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.660881 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"nova-api-db-create-k8j4t\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.682994 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.701294 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.702421 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708102 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708164 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708217 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708280 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708295 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.708376 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.709413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.714741 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.714993 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.717705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.725111 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.732344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.735518 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"ceilometer-0\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") " pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.765221 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.776193 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.780119 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.784346 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810038 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810555 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.810684 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.811359 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.836344 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.855402 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.860777 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"nova-cell0-db-create-n6kxs\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921270 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.921338 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.922318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.963478 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.974013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"nova-cell1-db-create-6ttpt\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.992435 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:08:07 crc kubenswrapper[4793]: I0130 14:08:07.993632 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.016368 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.024326 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.038648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.040015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.028684 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.077206 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"nova-api-5737-account-create-update-7wpgl\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.144000 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.144125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.159470 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.160031 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.245625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.245764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.246825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.283886 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"nova-cell0-a772-account-create-update-4n7jm\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.385194 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.439624 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.442803 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.479520 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c782cb-cc45-4785-bdff-d6d9e30389e8" path="/var/lib/kubelet/pods/45c782cb-cc45-4785-bdff-d6d9e30389e8/volumes" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.486225 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.505734 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.555873 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.555942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.660754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.661032 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.662122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.683824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"nova-cell1-e189-account-create-update-hp64h\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.742515 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.805303 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:08 crc kubenswrapper[4793]: I0130 14:08:08.929654 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.099869 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.117932 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:08:09 crc kubenswrapper[4793]: W0130 14:08:09.165670 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22f1b95b_bf17_486c_a4b0_0a2aa96cf847.slice/crio-2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b WatchSource:0}: Error finding container 2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b: Status 404 returned error can't find the container with id 2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.554537 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"4b73fadc6c8c2f194f24f28709e01df912df317bb62ccab5847b10d6fe6ae833"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.569108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerStarted","Data":"133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.569152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerStarted","Data":"a273f3836de526e82dca6ed6f42af688cb27feae454dd2f42ce8b2e0b73c5dfa"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.581219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6ttpt" event={"ID":"22f1b95b-bf17-486c-a4b0-0a2aa96cf847","Type":"ContainerStarted","Data":"2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.596332 4793 generic.go:334] "Generic (PLEG): container finished" podID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerID="031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170" exitCode=0 Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.596404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerDied","Data":"031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.598570 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.606152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n6kxs" event={"ID":"6a263a6b-c717-4bb9-ae46-edfd534e347f","Type":"ContainerStarted","Data":"204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1"} Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.622115 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.634324 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.640821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-k8j4t" podStartSLOduration=2.640798034 podStartE2EDuration="2.640798034s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:09.593822466 +0000 UTC m=+1500.295170957" watchObservedRunningTime="2026-01-30 14:08:09.640798034 +0000 UTC m=+1500.342146525" Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.641791 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75bd8998b8-27gd6" Jan 30 14:08:09 crc kubenswrapper[4793]: I0130 14:08:09.710955 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.108020 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.274871 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.274924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275011 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275181 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275235 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") 
pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275312 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") pod \"5559c03d-3177-4b79-9d5b-4272abb3332c\" (UID: \"5559c03d-3177-4b79-9d5b-4272abb3332c\") " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.275905 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs" (OuterVolumeSpecName: "logs") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.277785 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.281583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.317631 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.343779 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts" (OuterVolumeSpecName: "scripts") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.343783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv" (OuterVolumeSpecName: "kube-api-access-mhczv") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "kube-api-access-mhczv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380476 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380505 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhczv\" (UniqueName: \"kubernetes.io/projected/5559c03d-3177-4b79-9d5b-4272abb3332c-kube-api-access-mhczv\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380516 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.380524 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5559c03d-3177-4b79-9d5b-4272abb3332c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.526246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.529495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data" (OuterVolumeSpecName: "config-data") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.560938 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.563588 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5559c03d-3177-4b79-9d5b-4272abb3332c" (UID: "5559c03d-3177-4b79-9d5b-4272abb3332c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583791 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583825 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583837 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5559c03d-3177-4b79-9d5b-4272abb3332c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.583850 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.646061 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerStarted","Data":"2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.646113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerStarted","Data":"90130b1320508cde1497dbb65370a3963dd62f09c528149c60ea7d9a6a45074b"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.667024 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerStarted","Data":"28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.667114 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerStarted","Data":"8c4348c8357b277e9a66ed81f3e268940905c66caa51a7f6288db916158e5349"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.678311 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" podStartSLOduration=3.678292542 podStartE2EDuration="3.678292542s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:10.67326825 +0000 UTC m=+1501.374616751" watchObservedRunningTime="2026-01-30 14:08:10.678292542 +0000 UTC m=+1501.379641023" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.688867 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerID="133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3" exitCode=0 Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.688996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" 
event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerDied","Data":"133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.700519 4793 generic.go:334] "Generic (PLEG): container finished" podID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerID="de572dff5d2f58a1803be7f7064305ab032e127eb6c4e1ab6668a1723190ad57" exitCode=0 Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.700626 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6ttpt" event={"ID":"22f1b95b-bf17-486c-a4b0-0a2aa96cf847","Type":"ContainerDied","Data":"de572dff5d2f58a1803be7f7064305ab032e127eb6c4e1ab6668a1723190ad57"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.709028 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerStarted","Data":"3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.709088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerStarted","Data":"451da4a93e99f3be95f70ce67765d9ec8492af1c653717ecc19c70a1b959d011"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.712480 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-e189-account-create-update-hp64h" podStartSLOduration=2.712461729 podStartE2EDuration="2.712461729s" podCreationTimestamp="2026-01-30 14:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:10.701118515 +0000 UTC m=+1501.402467006" watchObservedRunningTime="2026-01-30 14:08:10.712461729 +0000 UTC m=+1501.413810220" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.727546 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5559c03d-3177-4b79-9d5b-4272abb3332c","Type":"ContainerDied","Data":"70a9907e2896545270e49ea508b4c54cd74205507f20d607e118c4c1d4eb4471"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.727632 4793 scope.go:117] "RemoveContainer" containerID="031f50784319cac124ddf65fb3b891ec178d8cabb6114ad6fed4b24cfd5aa170" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.727824 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.744246 4793 generic.go:334] "Generic (PLEG): container finished" podID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerID="8dcf35a2124b97e38202260bc4331118f9488517abad0d7a3392779f07bd54b6" exitCode=0 Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.744310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n6kxs" event={"ID":"6a263a6b-c717-4bb9-ae46-edfd534e347f","Type":"ContainerDied","Data":"8dcf35a2124b97e38202260bc4331118f9488517abad0d7a3392779f07bd54b6"} Jan 30 14:08:10 crc kubenswrapper[4793]: I0130 14:08:10.834683 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-5737-account-create-update-7wpgl" podStartSLOduration=3.834665729 podStartE2EDuration="3.834665729s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:10.805114613 +0000 UTC m=+1501.506463124" watchObservedRunningTime="2026-01-30 14:08:10.834665729 +0000 UTC m=+1501.536014210" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.035086 4793 scope.go:117] "RemoveContainer" containerID="dcaeea7ba1cea9514200e8739efe0c1afeee2c3dce2b9b6f14b9679193172dd8" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.151596 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.209356 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220117 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: E0130 14:08:11.220712 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220739 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" Jan 30 14:08:11 crc kubenswrapper[4793]: E0130 14:08:11.220766 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220774 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.220985 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-log" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.221023 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" containerName="glance-httpd" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.222525 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.231230 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.231481 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.236309 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325224 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325311 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325340 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47p5\" (UniqueName: \"kubernetes.io/projected/f96d1ae8-18a5-4651-b460-21e9ddb50684-kube-api-access-r47p5\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.325536 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-logs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-logs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429632 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429689 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47p5\" (UniqueName: \"kubernetes.io/projected/f96d1ae8-18a5-4651-b460-21e9ddb50684-kube-api-access-r47p5\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.429839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.430037 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.432035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-logs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.432532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f96d1ae8-18a5-4651-b460-21e9ddb50684-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.448455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.449366 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.450269 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.460606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f96d1ae8-18a5-4651-b460-21e9ddb50684-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.472805 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47p5\" (UniqueName: \"kubernetes.io/projected/f96d1ae8-18a5-4651-b460-21e9ddb50684-kube-api-access-r47p5\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.499495 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f96d1ae8-18a5-4651-b460-21e9ddb50684\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.570758 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.768713 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerID="640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32" exitCode=0 Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.768774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerDied","Data":"640bbc01e45a92a5825f900300d9f0b8086fc19b1ea387177e59aeb60ff48a32"} Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.769009 4793 scope.go:117] "RemoveContainer" containerID="f596f8243d020ebc541370451531edeb9f8ca985e2b5b436a6b072092db3b9f8" Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.780889 4793 generic.go:334] "Generic (PLEG): container finished" podID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerID="2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a" exitCode=0 Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.781160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerDied","Data":"2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a"} Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.785974 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658"} Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.788165 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerID="28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9" exitCode=0 Jan 30 14:08:11 crc kubenswrapper[4793]: I0130 14:08:11.788224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerDied","Data":"28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9"} Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.050855 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-668ffd44cc-lhns4" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.120221 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"] Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.120496 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75bd8998b8-27gd6" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api" containerID="cri-o://9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3" gracePeriod=30 Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.120913 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75bd8998b8-27gd6" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd" containerID="cri-o://aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8" gracePeriod=30 Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.448384 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5559c03d-3177-4b79-9d5b-4272abb3332c" 
path="/var/lib/kubelet/pods/5559c03d-3177-4b79-9d5b-4272abb3332c/volumes" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.734489 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.749712 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.773650 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.831684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6ttpt" event={"ID":"22f1b95b-bf17-486c-a4b0-0a2aa96cf847","Type":"ContainerDied","Data":"2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b"} Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.831751 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2331bde6c2ddaf7a832e6cb81e2fda29fa6facf6d947a44be7bfcab51ed5c22b" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.831851 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6ttpt" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.852882 4793 generic.go:334] "Generic (PLEG): container finished" podID="20523849-0caa-42b2-9b52-d5661f90ea95" containerID="3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c" exitCode=0 Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.852975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerDied","Data":"3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c"} Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") pod \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") pod \"ed8e6fd4-c884-4a5d-8189-3929beafa311\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") pod \"ed8e6fd4-c884-4a5d-8189-3929beafa311\" (UID: \"ed8e6fd4-c884-4a5d-8189-3929beafa311\") " Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") pod \"6a263a6b-c717-4bb9-ae46-edfd534e347f\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9cg\" 
(UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") pod \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\" (UID: \"22f1b95b-bf17-486c-a4b0-0a2aa96cf847\") " Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.871987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") pod \"6a263a6b-c717-4bb9-ae46-edfd534e347f\" (UID: \"6a263a6b-c717-4bb9-ae46-edfd534e347f\") " Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.880360 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22f1b95b-bf17-486c-a4b0-0a2aa96cf847" (UID: "22f1b95b-bf17-486c-a4b0-0a2aa96cf847"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.880780 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a263a6b-c717-4bb9-ae46-edfd534e347f" (UID: "6a263a6b-c717-4bb9-ae46-edfd534e347f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.881319 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed8e6fd4-c884-4a5d-8189-3929beafa311" (UID: "ed8e6fd4-c884-4a5d-8189-3929beafa311"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.887187 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4" (OuterVolumeSpecName: "kube-api-access-vktr4") pod "6a263a6b-c717-4bb9-ae46-edfd534e347f" (UID: "6a263a6b-c717-4bb9-ae46-edfd534e347f"). InnerVolumeSpecName "kube-api-access-vktr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.890317 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p" (OuterVolumeSpecName: "kube-api-access-l2x8p") pod "ed8e6fd4-c884-4a5d-8189-3929beafa311" (UID: "ed8e6fd4-c884-4a5d-8189-3929beafa311"). InnerVolumeSpecName "kube-api-access-l2x8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.892885 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg" (OuterVolumeSpecName: "kube-api-access-lm9cg") pod "22f1b95b-bf17-486c-a4b0-0a2aa96cf847" (UID: "22f1b95b-bf17-486c-a4b0-0a2aa96cf847"). InnerVolumeSpecName "kube-api-access-lm9cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.892978 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-n6kxs" event={"ID":"6a263a6b-c717-4bb9-ae46-edfd534e347f","Type":"ContainerDied","Data":"204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1"} Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.893015 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.893097 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-n6kxs" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.938348 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.944585 4793 generic.go:334] "Generic (PLEG): container finished" podID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerID="aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8" exitCode=0 Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.944668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerDied","Data":"aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8"} Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.966306 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9fc5f8f6-nj7xv" event={"ID":"7c37d49c-cbd6-47d6-8f29-51ec6fac2f61","Type":"ContainerStarted","Data":"d2335cce21b11d1ab56e3ad35e0c55bce3cf69e2db057d909aa07232df9135ae"} Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.986603 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987092 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed8e6fd4-c884-4a5d-8189-3929beafa311-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987187 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2x8p\" (UniqueName: \"kubernetes.io/projected/ed8e6fd4-c884-4a5d-8189-3929beafa311-kube-api-access-l2x8p\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987282 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a263a6b-c717-4bb9-ae46-edfd534e347f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987380 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9cg\" (UniqueName: \"kubernetes.io/projected/22f1b95b-bf17-486c-a4b0-0a2aa96cf847-kube-api-access-lm9cg\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.987459 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vktr4\" (UniqueName: \"kubernetes.io/projected/6a263a6b-c717-4bb9-ae46-edfd534e347f-kube-api-access-vktr4\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:12 crc kubenswrapper[4793]: I0130 14:08:12.992112 4793 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k8j4t" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:12.993416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k8j4t" event={"ID":"ed8e6fd4-c884-4a5d-8189-3929beafa311","Type":"ContainerDied","Data":"a273f3836de526e82dca6ed6f42af688cb27feae454dd2f42ce8b2e0b73c5dfa"} Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.018927 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a273f3836de526e82dca6ed6f42af688cb27feae454dd2f42ce8b2e0b73c5dfa" Jan 30 14:08:13 crc kubenswrapper[4793]: E0130 14:08:13.108246 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode26816b7_89ad_4885_b481_3ae7a8ab90c4.slice/crio-aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a263a6b_c717_4bb9_ae46_edfd534e347f.slice/crio-204621118ed93b535a5417e9eb931e17a66ea847b73aaecad338afef5f30ccc1\": RecentStats: unable to find data in memory cache]" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.580629 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.728447 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") pod \"aec60191-c8b7-4d7a-a69f-765a9652878b\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.728802 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") pod \"aec60191-c8b7-4d7a-a69f-765a9652878b\" (UID: \"aec60191-c8b7-4d7a-a69f-765a9652878b\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.730128 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aec60191-c8b7-4d7a-a69f-765a9652878b" (UID: "aec60191-c8b7-4d7a-a69f-765a9652878b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.741324 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg" (OuterVolumeSpecName: "kube-api-access-zf9xg") pod "aec60191-c8b7-4d7a-a69f-765a9652878b" (UID: "aec60191-c8b7-4d7a-a69f-765a9652878b"). InnerVolumeSpecName "kube-api-access-zf9xg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.833335 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9xg\" (UniqueName: \"kubernetes.io/projected/aec60191-c8b7-4d7a-a69f-765a9652878b-kube-api-access-zf9xg\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.833358 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec60191-c8b7-4d7a-a69f-765a9652878b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.848266 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.934542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") pod \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.934624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") pod \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\" (UID: \"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167\") " Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.935066 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" (UID: "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.938574 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs" (OuterVolumeSpecName: "kube-api-access-fstbs") pod "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" (UID: "8ec3637c-09ef-47f6-bce5-dcc3f4d6e167"). InnerVolumeSpecName "kube-api-access-fstbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.941456 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:13 crc kubenswrapper[4793]: I0130 14:08:13.941490 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fstbs\" (UniqueName: \"kubernetes.io/projected/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167-kube-api-access-fstbs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.025811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.028469 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e189-account-create-update-hp64h" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.028485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e189-account-create-update-hp64h" event={"ID":"8ec3637c-09ef-47f6-bce5-dcc3f4d6e167","Type":"ContainerDied","Data":"8c4348c8357b277e9a66ed81f3e268940905c66caa51a7f6288db916158e5349"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.028530 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4348c8357b277e9a66ed81f3e268940905c66caa51a7f6288db916158e5349" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.038255 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f96d1ae8-18a5-4651-b460-21e9ddb50684","Type":"ContainerStarted","Data":"01ddeb32f879e43a83e42f0d24ceaef2dc5cfaaf6a7650ad4d71889356b2adab"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.045995 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.046171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a772-account-create-update-4n7jm" event={"ID":"aec60191-c8b7-4d7a-a69f-765a9652878b","Type":"ContainerDied","Data":"90130b1320508cde1497dbb65370a3963dd62f09c528149c60ea7d9a6a45074b"} Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.046242 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90130b1320508cde1497dbb65370a3963dd62f09c528149c60ea7d9a6a45074b" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.511954 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.592493 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.592974 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7767cf976c-8m6hn" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.661949 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") pod \"20523849-0caa-42b2-9b52-d5661f90ea95\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.662006 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") pod \"20523849-0caa-42b2-9b52-d5661f90ea95\" (UID: \"20523849-0caa-42b2-9b52-d5661f90ea95\") " Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.663452 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20523849-0caa-42b2-9b52-d5661f90ea95" (UID: "20523849-0caa-42b2-9b52-d5661f90ea95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.670427 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8" (OuterVolumeSpecName: "kube-api-access-nfvh8") pod "20523849-0caa-42b2-9b52-d5661f90ea95" (UID: "20523849-0caa-42b2-9b52-d5661f90ea95"). InnerVolumeSpecName "kube-api-access-nfvh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.768248 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20523849-0caa-42b2-9b52-d5661f90ea95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:14 crc kubenswrapper[4793]: I0130 14:08:14.768287 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfvh8\" (UniqueName: \"kubernetes.io/projected/20523849-0caa-42b2-9b52-d5661f90ea95-kube-api-access-nfvh8\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.069532 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5737-account-create-update-7wpgl" Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.069872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5737-account-create-update-7wpgl" event={"ID":"20523849-0caa-42b2-9b52-d5661f90ea95","Type":"ContainerDied","Data":"451da4a93e99f3be95f70ce67765d9ec8492af1c653717ecc19c70a1b959d011"} Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.069906 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451da4a93e99f3be95f70ce67765d9ec8492af1c653717ecc19c70a1b959d011" Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.081902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e"} Jan 30 14:08:15 crc kubenswrapper[4793]: I0130 14:08:15.092585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f96d1ae8-18a5-4651-b460-21e9ddb50684","Type":"ContainerStarted","Data":"c59f9359bb100a7aec824b49a32eebb8648ff9a075e46ec6df4a5884b0447749"} Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.101867 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f96d1ae8-18a5-4651-b460-21e9ddb50684","Type":"ContainerStarted","Data":"d031e9f6d658416bd44e51043a5059246e656d8e514d5c5e93d5efdadd7f1105"} Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.125671 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.125648365 podStartE2EDuration="5.125648365s" podCreationTimestamp="2026-01-30 14:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:16.121108206 +0000 UTC m=+1506.822456707" watchObservedRunningTime="2026-01-30 14:08:16.125648365 +0000 UTC m=+1506.826996856" Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.747075 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.747591 4793 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log" containerID="cri-o://d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c" gracePeriod=30 Jan 30 14:08:16 crc kubenswrapper[4793]: I0130 14:08:16.748007 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd" containerID="cri-o://7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951" gracePeriod=30 Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.110268 4793 generic.go:334] "Generic (PLEG): container finished" podID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerID="d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c" exitCode=143 Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.110356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerDied","Data":"d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c"} Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.113226 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerStarted","Data":"78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe"} Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.113398 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:08:17 crc kubenswrapper[4793]: I0130 14:08:17.152206 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.581936098 podStartE2EDuration="10.152189828s" podCreationTimestamp="2026-01-30 14:08:07 +0000 UTC" firstStartedPulling="2026-01-30 14:08:08.95110451 +0000 UTC m=+1499.652453001" lastFinishedPulling="2026-01-30 14:08:16.52135824 +0000 UTC m=+1507.222706731" observedRunningTime="2026-01-30 14:08:17.146908451 +0000 UTC m=+1507.848256942" watchObservedRunningTime="2026-01-30 14:08:17.152189828 +0000 UTC m=+1507.853538319" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"] Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.292914 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292928 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.292945 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.292954 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.292965 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: 
I0130 14:08:18.292972 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.293002 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293010 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.293023 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293030 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: E0130 14:08:18.293041 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293062 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293249 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293273 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293287 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293303 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293312 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" containerName="mariadb-account-create-update" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.293328 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" containerName="mariadb-database-create" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.294137 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.297149 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.297548 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.297707 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rgtrf" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.309305 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"] Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.435833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.436121 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.436345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.436437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: 
\"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.538737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.576259 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.580582 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.581123 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.582591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"nova-cell0-conductor-db-sync-w8lcj\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:18 crc kubenswrapper[4793]: I0130 14:08:18.633372 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.238855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"] Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.608855 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.146:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8080: connect: connection refused" Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.831875 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:19 crc kubenswrapper[4793]: I0130 14:08:19.831973 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179371 4793 generic.go:334] "Generic (PLEG): container finished" podID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerID="9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3" exitCode=0 Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerDied","Data":"9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3"} Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75bd8998b8-27gd6" event={"ID":"e26816b7-89ad-4885-b481-3ae7a8ab90c4","Type":"ContainerDied","Data":"0c2d21afdba7970d61ae9dcca3d44a8ee8d119daf524bd616f6bfe333ace90f3"} Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.179465 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2d21afdba7970d61ae9dcca3d44a8ee8d119daf524bd616f6bfe333ace90f3" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.197975 4793 generic.go:334] "Generic (PLEG): container finished" podID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerID="7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951" exitCode=0 Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.198037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerDied","Data":"7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951"} Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.212068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerStarted","Data":"10458f2044a1485dd49f34389e009c76947a11228dc091b7963498c198351281"} Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.242801 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.417810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.417868 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.417991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.418014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.418030 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") pod \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\" (UID: \"e26816b7-89ad-4885-b481-3ae7a8ab90c4\") " Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.428591 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.445235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7" (OuterVolumeSpecName: "kube-api-access-vc2r7") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "kube-api-access-vc2r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.523238 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.523486 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc2r7\" (UniqueName: \"kubernetes.io/projected/e26816b7-89ad-4885-b481-3ae7a8ab90c4-kube-api-access-vc2r7\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.579505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config" (OuterVolumeSpecName: "config") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.590825 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.611340 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e26816b7-89ad-4885-b481-3ae7a8ab90c4" (UID: "e26816b7-89ad-4885-b481-3ae7a8ab90c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.624790 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.624826 4793 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:20 crc kubenswrapper[4793]: I0130 14:08:20.624836 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e26816b7-89ad-4885-b481-3ae7a8ab90c4-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.024013 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137556 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137605 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137628 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137656 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137674 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137697 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137752 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.137797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") pod \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\" (UID: \"afd812b0-55db-4cff-b0cd-4b18afe5a4be\") "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.138105 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs" (OuterVolumeSpecName: "logs") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.138395 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.138468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.151577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts" (OuterVolumeSpecName: "scripts") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.152901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.153835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd" (OuterVolumeSpecName: "kube-api-access-44tdd") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "kube-api-access-44tdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.232450 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240708 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240732 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240743 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44tdd\" (UniqueName: \"kubernetes.io/projected/afd812b0-55db-4cff-b0cd-4b18afe5a4be-kube-api-access-44tdd\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240752 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afd812b0-55db-4cff-b0cd-4b18afe5a4be-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.240760 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.271917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.272663 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75bd8998b8-27gd6"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.272757 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.272771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afd812b0-55db-4cff-b0cd-4b18afe5a4be","Type":"ContainerDied","Data":"2863a64e0737f90ead25e88cb3e95128501f7112f292e0e206879eebe7f45380"}
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.274616 4793 scope.go:117] "RemoveContainer" containerID="7fcd99ccac2b000f72be7038dcce1804ca999ec354f3fa50a7ce90a221f56951"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.295172 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.298693 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data" (OuterVolumeSpecName: "config-data") pod "afd812b0-55db-4cff-b0cd-4b18afe5a4be" (UID: "afd812b0-55db-4cff-b0cd-4b18afe5a4be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.345651 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.345686 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.345699 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd812b0-55db-4cff-b0cd-4b18afe5a4be-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.348358 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.359671 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75bd8998b8-27gd6"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.367247 4793 scope.go:117] "RemoveContainer" containerID="d6909ec1b1d6acd6ea51f39341116d0dc581b2cb648e5824a50f0830c242d28c"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.573083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.573265 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.638203 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.639526 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.654725 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.661749 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662283 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662364 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662552 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662616 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log"
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662681 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api"
Jan 30 14:08:21 crc kubenswrapper[4793]: E0130 14:08:21.662804 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.662857 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.663099 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-api"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.663193 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" containerName="neutron-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.663271 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.668215 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.669538 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.669618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.680195 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.680502 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.699686 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756344 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzjn\" (UniqueName: \"kubernetes.io/projected/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-kube-api-access-tzzjn\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756414 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-logs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756594 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756646 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.756680 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.858792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859564 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859670 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859872 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.859974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzzjn\" (UniqueName: \"kubernetes.io/projected/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-kube-api-access-tzzjn\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860109 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-logs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-logs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.860913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.871947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.876762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.877299 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.877884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.909451 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzzjn\" (UniqueName: \"kubernetes.io/projected/ae7d1df8-4b0f-46f7-85f4-e24fd65a919d-kube-api-access-tzzjn\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.916686 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:08:21 crc kubenswrapper[4793]: I0130 14:08:21.995079 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.302632 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.302822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.412204 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" path="/var/lib/kubelet/pods/afd812b0-55db-4cff-b0cd-4b18afe5a4be/volumes"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.412942 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26816b7-89ad-4885-b481-3ae7a8ab90c4" path="/var/lib/kubelet/pods/e26816b7-89ad-4885-b481-3ae7a8ab90c4/volumes"
Jan 30 14:08:22 crc kubenswrapper[4793]: I0130 14:08:22.663349 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:08:22 crc kubenswrapper[4793]: W0130 14:08:22.692015 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae7d1df8_4b0f_46f7_85f4_e24fd65a919d.slice/crio-ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416 WatchSource:0}: Error finding container ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416: Status 404 returned error can't find the container with id ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416
Jan 30 14:08:23 crc kubenswrapper[4793]: I0130 14:08:23.321491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7","Type":"ContainerStarted","Data":"96b175898b6a8155cc9b6df77597096c4715b37a8b44b1616f769e51e1320186"}
Jan 30 14:08:23 crc kubenswrapper[4793]: I0130 14:08:23.329551 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d","Type":"ContainerStarted","Data":"ce529dcdfc33186c49ecf9563fad5a69751d64f831df668df9d9047337f8e416"}
Jan 30 14:08:23 crc kubenswrapper[4793]: I0130 14:08:23.352906 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.459793035 podStartE2EDuration="38.352885697s" podCreationTimestamp="2026-01-30 14:07:45 +0000 UTC" firstStartedPulling="2026-01-30 14:07:46.69780082 +0000 UTC m=+1477.399149311" lastFinishedPulling="2026-01-30 14:08:22.590893482 +0000 UTC m=+1513.292241973" observedRunningTime="2026-01-30 14:08:23.339475793 +0000 UTC m=+1514.040824284" watchObservedRunningTime="2026-01-30 14:08:23.352885697 +0000 UTC m=+1514.054234188"
Jan 30 14:08:24 crc kubenswrapper[4793]: I0130 14:08:24.341618 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d","Type":"ContainerStarted","Data":"503f96a09eca509027938ff0c9d0ac2065d3fbcd11bc7f66eb0d6e55bd0de7ba"}
Jan 30 14:08:25 crc kubenswrapper[4793]: I0130 14:08:25.370777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae7d1df8-4b0f-46f7-85f4-e24fd65a919d","Type":"ContainerStarted","Data":"15d75e3810a861b7bbc46e5562fd9f0ed5fc04b9db54a0f610d1e8824d83ad3f"}
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.415866 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.416936 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.429040 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 14:08:27 crc kubenswrapper[4793]: I0130 14:08:27.444095 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.444075945 podStartE2EDuration="6.444075945s" podCreationTimestamp="2026-01-30 14:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:08:25.398926712 +0000 UTC m=+1516.100275203" watchObservedRunningTime="2026-01-30 14:08:27.444075945 +0000 UTC m=+1518.145424436"
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.419450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"}
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.419771 4793 scope.go:117] "RemoveContainer" containerID="1a0edd78ac934a217d77619cfa86e0fdb058839606603994d0152ae52ba43266"
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.420272 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" exitCode=1
Jan 30 14:08:28 crc kubenswrapper[4793]: I0130 14:08:28.420502 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"
Jan 30 14:08:28 crc kubenswrapper[4793]: E0130 14:08:28.420704 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5)\"" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5"
Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.609236 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp"
Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.609299 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b66cd9fcf-c94kp"
Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.610158 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"
Jan 30 14:08:29 crc kubenswrapper[4793]: E0130 14:08:29.610403 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5)\"" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5"
Jan 30 14:08:29 crc kubenswrapper[4793]: I0130 14:08:29.839413 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:08:31 crc kubenswrapper[4793]: I0130 14:08:31.995502 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:31 crc kubenswrapper[4793]: I0130 14:08:31.995842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.134275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.134748 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.461268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:32 crc kubenswrapper[4793]: I0130 14:08:32.461468 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:38 crc kubenswrapper[4793]: I0130 14:08:38.168630 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.495905 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified"
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.496301 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xntcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-w8lcj_openstack(4ba071cd-0f26-432d-809e-709cad1a1e64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.498183 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64"
Jan 30 14:08:38 crc kubenswrapper[4793]: E0130 14:08:38.665423 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64"
Jan 30 14:08:39 crc kubenswrapper[4793]: I0130 14:08:39.831978 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9fc5f8f6-nj7xv" podUID="7c37d49c-cbd6-47d6-8f29-51ec6fac2f61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.018842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.018968 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.026131 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.300841 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301607 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core" containerID="cri-o://952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e" gracePeriod=30
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301626 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent" containerID="cri-o://f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd" gracePeriod=30
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301626 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd" containerID="cri-o://78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe" gracePeriod=30
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.301939 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent" containerID="cri-o://770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658" gracePeriod=30
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.547328 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e" exitCode=2
Jan 30 14:08:40 crc kubenswrapper[4793]: I0130 14:08:40.547369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e"}
Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.398929 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002"
Jan 30 14:08:41 crc kubenswrapper[4793]: E0130 14:08:41.400478 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b66cd9fcf-c94kp_openstack(ecab991a-220f-4b09-a1fa-f43fef3d0be5)\"" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5"
Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558316 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe" exitCode=0
Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558344 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658" exitCode=0
Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe"}
Jan 30 14:08:41 crc kubenswrapper[4793]: I0130 14:08:41.558388 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658"}
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.593518 4793 generic.go:334] "Generic (PLEG): container finished" podID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerID="f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd" exitCode=0
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.593678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd"}
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.727944 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.791821 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.792160 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.792264 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797702 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797803 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.797858 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") pod \"86bca6e8-77db-4dad-a8d5-3b7718c60688\" (UID: \"86bca6e8-77db-4dad-a8d5-3b7718c60688\") "
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.802611 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.808189 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.808901 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.808970 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86bca6e8-77db-4dad-a8d5-3b7718c60688-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.834696 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts" (OuterVolumeSpecName: "scripts") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.843379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7" (OuterVolumeSpecName: "kube-api-access-bj9v7") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "kube-api-access-bj9v7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.914839 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9v7\" (UniqueName: \"kubernetes.io/projected/86bca6e8-77db-4dad-a8d5-3b7718c60688-kube-api-access-bj9v7\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.914881 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:42 crc kubenswrapper[4793]: I0130 14:08:42.989151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.004222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.019420 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.019735 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.101010 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data" (OuterVolumeSpecName: "config-data") pod "86bca6e8-77db-4dad-a8d5-3b7718c60688" (UID: "86bca6e8-77db-4dad-a8d5-3b7718c60688"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.121636 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86bca6e8-77db-4dad-a8d5-3b7718c60688-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.606716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86bca6e8-77db-4dad-a8d5-3b7718c60688","Type":"ContainerDied","Data":"4b73fadc6c8c2f194f24f28709e01df912df317bb62ccab5847b10d6fe6ae833"}
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.607028 4793 scope.go:117] "RemoveContainer" containerID="78a1272f5a0efb9c0f9952508ceaecc1543daf837224cceb68be086ddee0cdbe"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.606846 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.645249 4793 scope.go:117] "RemoveContainer" containerID="952ea4bae6adab4daa0b82fc192ab0083da34e2f73d1e17c743c0bc6a664325e"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.654526 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.664620 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.681919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682297 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682307 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core"
Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682323 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682330 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent"
Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682340 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682345 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd"
Jan 30 14:08:43 crc kubenswrapper[4793]: E0130 14:08:43.682364 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682370 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682547 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="sg-core"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682562 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-central-agent"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682573 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="ceilometer-notification-agent"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.682581 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" containerName="proxy-httpd"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.685241 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.689246 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.689957 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.694195 4793 scope.go:117] "RemoveContainer" containerID="f455b4d10e53f36a56989caad1569b935b4a6126cea9aa339351b0f9175fbebd"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.732109 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.733277 4793 scope.go:117] "RemoveContainer" containerID="770623c7f72dcc371d6d0f171741332c80551d1140706f6273b2e8ffc6402658"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741405 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741530 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741556 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.741723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843717 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843823 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843891 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.843963 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.844018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.844089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.844665 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.846660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.850022 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.850951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.861307 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.864157 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:43 crc kubenswrapper[4793]: I0130 14:08:43.864742 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " pod="openstack/ceilometer-0"
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.006222 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.031661 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.066280 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.066469 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics" containerID="cri-o://7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8" gracePeriod=30
Jan 30 14:08:44 crc kubenswrapper[4793]: E0130 14:08:44.175241 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode61af9bc_c79d_4e81_a602_37afbdc017a5.slice/crio-conmon-7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.409865 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bca6e8-77db-4dad-a8d5-3b7718c60688" path="/var/lib/kubelet/pods/86bca6e8-77db-4dad-a8d5-3b7718c60688/volumes"
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.626313 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.630925 4793 generic.go:334] "Generic (PLEG): container finished" podID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerID="7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8" exitCode=2
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.630993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerDied","Data":"7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8"}
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.878315 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.967753 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") pod \"e61af9bc-c79d-4e81-a602-37afbdc017a5\" (UID: \"e61af9bc-c79d-4e81-a602-37afbdc017a5\") "
Jan 30 14:08:44 crc kubenswrapper[4793]: I0130 14:08:44.984804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f" (OuterVolumeSpecName: "kube-api-access-g555f") pod "e61af9bc-c79d-4e81-a602-37afbdc017a5" (UID: "e61af9bc-c79d-4e81-a602-37afbdc017a5"). InnerVolumeSpecName "kube-api-access-g555f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.069705 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g555f\" (UniqueName: \"kubernetes.io/projected/e61af9bc-c79d-4e81-a602-37afbdc017a5-kube-api-access-g555f\") on node \"crc\" DevicePath \"\""
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.651793 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.651789 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e61af9bc-c79d-4e81-a602-37afbdc017a5","Type":"ContainerDied","Data":"71bf22217d9be03e116230139d0442df663407d89a0d201f8b40fe58cd8686cf"}
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.653072 4793 scope.go:117] "RemoveContainer" containerID="7b7669483d549eb24b141c74941db71192f0f6e724c0813bbeee9ca2352f85e8"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.656553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1"}
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.656604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"eecf2aa20735ff086d97e3185c5a1181c5ec03a1c551f179de1e5ab7d6e9d69f"}
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.753097 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.780990 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.796187 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:08:45 crc kubenswrapper[4793]: E0130 14:08:45.796527 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.796541 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.796744 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" containerName="kube-state-metrics"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.797295 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.802509 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.802730 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.805903 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.891469 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lpjr\" (UniqueName: \"kubernetes.io/projected/a3625667-be35-4d81-84f9-e00593f1c627-kube-api-access-8lpjr\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lpjr\" (UniqueName: \"kubernetes.io/projected/a3625667-be35-4d81-84f9-e00593f1c627-kube-api-access-8lpjr\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0"
Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") "
pod="openstack/kube-state-metrics-0" Jan 30 14:08:45 crc kubenswrapper[4793]: I0130 14:08:45.993550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.000618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.001167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.001818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3625667-be35-4d81-84f9-e00593f1c627-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.039710 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lpjr\" (UniqueName: \"kubernetes.io/projected/a3625667-be35-4d81-84f9-e00593f1c627-kube-api-access-8lpjr\") pod \"kube-state-metrics-0\" (UID: \"a3625667-be35-4d81-84f9-e00593f1c627\") " pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.064695 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.417767 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61af9bc-c79d-4e81-a602-37afbdc017a5" path="/var/lib/kubelet/pods/e61af9bc-c79d-4e81-a602-37afbdc017a5/volumes" Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.667491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041"} Jan 30 14:08:46 crc kubenswrapper[4793]: I0130 14:08:46.774542 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.677923 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a3625667-be35-4d81-84f9-e00593f1c627","Type":"ContainerStarted","Data":"e7f9184db53386ef31e0793929c5ebc7d7e2d2ebb6c38c2a7b5886982a8e4476"} Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.678271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a3625667-be35-4d81-84f9-e00593f1c627","Type":"ContainerStarted","Data":"1843f750d363c51d0ba0072dae26fc1f3deb23f4082a149f1fe915f142a2a03f"} Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.678292 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.683394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7"} Jan 30 14:08:47 crc kubenswrapper[4793]: I0130 14:08:47.702095 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.34412437 podStartE2EDuration="2.70207739s" podCreationTimestamp="2026-01-30 14:08:45 +0000 UTC" firstStartedPulling="2026-01-30 14:08:46.75249181 +0000 UTC m=+1537.453840301" lastFinishedPulling="2026-01-30 14:08:47.11044484 +0000 UTC m=+1537.811793321" observedRunningTime="2026-01-30 14:08:47.693736248 +0000 UTC m=+1538.395084729" watchObservedRunningTime="2026-01-30 14:08:47.70207739 +0000 UTC m=+1538.403425881" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.719451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerStarted","Data":"7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0"} Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.721099 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" containerID="cri-o://7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.721532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.721975 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" 
containerID="cri-o://7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.722160 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" containerID="cri-o://bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.722309 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" containerID="cri-o://9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041" gracePeriod=30 Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.730622 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.730986 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="afd812b0-55db-4cff-b0cd-4b18afe5a4be" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": dial tcp 10.217.0.150:9292: i/o timeout" Jan 30 14:08:50 crc kubenswrapper[4793]: I0130 14:08:50.743483 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.666082597 podStartE2EDuration="7.743463451s" podCreationTimestamp="2026-01-30 14:08:43 +0000 UTC" firstStartedPulling="2026-01-30 14:08:44.646396941 +0000 UTC m=+1535.347745432" lastFinishedPulling="2026-01-30 14:08:49.723777795 +0000 UTC m=+1540.425126286" observedRunningTime="2026-01-30 14:08:50.739743871 +0000 UTC m=+1541.441092382" watchObservedRunningTime="2026-01-30 14:08:50.743463451 +0000 UTC m=+1541.444811942" Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.728768 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0" exitCode=0 Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729145 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7" exitCode=2 Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729156 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041" exitCode=0 Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.728804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0"} Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729197 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7"} 
Jan 30 14:08:51 crc kubenswrapper[4793]: I0130 14:08:51.729214 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041"} Jan 30 14:08:52 crc kubenswrapper[4793]: I0130 14:08:52.398250 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" Jan 30 14:08:52 crc kubenswrapper[4793]: I0130 14:08:52.739478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerStarted","Data":"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98"} Jan 30 14:08:53 crc kubenswrapper[4793]: I0130 14:08:53.310188 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:53 crc kubenswrapper[4793]: I0130 14:08:53.754155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerStarted","Data":"90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269"} Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.402810 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5b9fc5f8f6-nj7xv" Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.427619 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" podStartSLOduration=3.542153091 podStartE2EDuration="37.427600989s" podCreationTimestamp="2026-01-30 14:08:18 +0000 UTC" firstStartedPulling="2026-01-30 14:08:19.254835123 +0000 UTC m=+1509.956183614" lastFinishedPulling="2026-01-30 14:08:53.140283021 +0000 UTC m=+1543.841631512" observedRunningTime="2026-01-30 14:08:53.779622395 +0000 UTC m=+1544.480970886" watchObservedRunningTime="2026-01-30 14:08:55.427600989 +0000 UTC m=+1546.128949480" Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.472801 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.473219 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" containerID="cri-o://448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" gracePeriod=30 Jan 30 14:08:55 crc kubenswrapper[4793]: I0130 14:08:55.473315 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b66cd9fcf-c94kp" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" containerID="cri-o://320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" gracePeriod=30 Jan 30 14:08:56 crc kubenswrapper[4793]: I0130 14:08:56.082393 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 14:08:56 crc kubenswrapper[4793]: I0130 14:08:56.806395 4793 generic.go:334] "Generic (PLEG): container finished" podID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerID="7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1" exitCode=0 Jan 30 14:08:56 crc kubenswrapper[4793]: I0130 14:08:56.806672 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1"} Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.007313 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030280 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030400 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030448 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030513 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030700 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030752 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") pod \"773729ea-70f7-46f4-858a-3fbbf522a4cb\" (UID: \"773729ea-70f7-46f4-858a-3fbbf522a4cb\") " Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.030805 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.031227 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.031516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.040484 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m" (OuterVolumeSpecName: "kube-api-access-xzc8m") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "kube-api-access-xzc8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.066490 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts" (OuterVolumeSpecName: "scripts") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.113570 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.132578 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.132821 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773729ea-70f7-46f4-858a-3fbbf522a4cb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.132914 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.133125 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzc8m\" (UniqueName: \"kubernetes.io/projected/773729ea-70f7-46f4-858a-3fbbf522a4cb-kube-api-access-xzc8m\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.143783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.181747 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data" (OuterVolumeSpecName: "config-data") pod "773729ea-70f7-46f4-858a-3fbbf522a4cb" (UID: "773729ea-70f7-46f4-858a-3fbbf522a4cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.234471 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.234507 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773729ea-70f7-46f4-858a-3fbbf522a4cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.818678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773729ea-70f7-46f4-858a-3fbbf522a4cb","Type":"ContainerDied","Data":"eecf2aa20735ff086d97e3185c5a1181c5ec03a1c551f179de1e5ab7d6e9d69f"} Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.818795 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.819017 4793 scope.go:117] "RemoveContainer" containerID="7e3195426ef0018e7d03680ce368b57cacddb9796d8102941be6175b21f05dc0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.851264 4793 scope.go:117] "RemoveContainer" containerID="bc4b432bc8394955eab117617a3e4958a1a48374a1982d0569537d928437b6d7" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.867566 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.872441 4793 scope.go:117] "RemoveContainer" containerID="9d8788e45690dee8efc0dfa0689f7dbbda658385cae5d1fea43716b8efad2041" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.914391 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.955567 4793 scope.go:117] "RemoveContainer" containerID="7dc962edb603898f31fe34f2b48e7775ea335507b82c1acbcf65c59db80b44b1" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.968864 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969426 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969494 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969574 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969633 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969685 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969741 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" Jan 30 14:08:57 crc kubenswrapper[4793]: E0130 14:08:57.969847 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.969951 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970202 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-central-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970298 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="proxy-httpd" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970357 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="ceilometer-notification-agent" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.970423 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" containerName="sg-core" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.972091 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.975867 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.976289 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.976476 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 14:08:57 crc kubenswrapper[4793]: I0130 14:08:57.980369 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158832 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158870 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.158892 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.159937 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.159985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.160280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.262310 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.262997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263162 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263268 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263348 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " 
pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263453 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263657 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.263903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.269967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.271483 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.273676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.276474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.281256 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc 
kubenswrapper[4793]: I0130 14:08:58.281351 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"ceilometer-0\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.293212 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.409838 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773729ea-70f7-46f4-858a-3fbbf522a4cb" path="/var/lib/kubelet/pods/773729ea-70f7-46f4-858a-3fbbf522a4cb/volumes" Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.765233 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:08:58 crc kubenswrapper[4793]: I0130 14:08:58.830434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"0a3be02686a9c4c880d5b9cfa276326d8b8efbc8e4a9d1cedd06cf45fa0269bc"} Jan 30 14:08:59 crc kubenswrapper[4793]: I0130 14:08:59.609547 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:08:59 crc kubenswrapper[4793]: I0130 14:08:59.847943 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} Jan 30 14:09:01 crc kubenswrapper[4793]: I0130 14:09:01.865911 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} Jan 30 14:09:01 crc kubenswrapper[4793]: I0130 14:09:01.866519 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} Jan 30 14:09:05 crc kubenswrapper[4793]: I0130 14:09:05.911879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerStarted","Data":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} Jan 30 14:09:05 crc kubenswrapper[4793]: I0130 14:09:05.913976 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:09:05 crc kubenswrapper[4793]: I0130 14:09:05.935360 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8506229039999997 podStartE2EDuration="8.935343374s" podCreationTimestamp="2026-01-30 14:08:57 +0000 UTC" firstStartedPulling="2026-01-30 14:08:58.75838504 +0000 UTC m=+1549.459733531" lastFinishedPulling="2026-01-30 14:09:04.8431055 +0000 UTC m=+1555.544454001" observedRunningTime="2026-01-30 14:09:05.932925195 +0000 UTC m=+1556.634273696" watchObservedRunningTime="2026-01-30 14:09:05.935343374 +0000 UTC m=+1556.636691865" Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.756239 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.756953 4793 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" containerID="cri-o://c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.757685 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" containerID="cri-o://325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.757726 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" containerID="cri-o://6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.757767 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" containerID="cri-o://767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" gracePeriod=30 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.965528 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerID="90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269" exitCode=0 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.965604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerDied","Data":"90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269"} Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.976213 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" exitCode=2 Jan 30 14:09:09 crc kubenswrapper[4793]: I0130 14:09:09.976273 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.574782 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.761766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.761841 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.761906 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762003 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762027 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762115 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762169 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.762224 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") pod \"a1ae5858-557d-445a-b00f-cbdc514dc672\" (UID: \"a1ae5858-557d-445a-b00f-cbdc514dc672\") " Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.763629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.763723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.768225 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n" (OuterVolumeSpecName: "kube-api-access-sjh8n") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "kube-api-access-sjh8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.768518 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts" (OuterVolumeSpecName: "scripts") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.797001 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.810505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.832973 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.860174 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data" (OuterVolumeSpecName: "config-data") pod "a1ae5858-557d-445a-b00f-cbdc514dc672" (UID: "a1ae5858-557d-445a-b00f-cbdc514dc672"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864308 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864331 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864346 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864356 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjh8n\" (UniqueName: \"kubernetes.io/projected/a1ae5858-557d-445a-b00f-cbdc514dc672-kube-api-access-sjh8n\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864364 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1ae5858-557d-445a-b00f-cbdc514dc672-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864373 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864380 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.864388 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ae5858-557d-445a-b00f-cbdc514dc672-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987326 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" exitCode=0 Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987356 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" exitCode=0 Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987366 4793 generic.go:334] "Generic (PLEG): container finished" podID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" exitCode=0 Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.987546 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988778 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988789 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1ae5858-557d-445a-b00f-cbdc514dc672","Type":"ContainerDied","Data":"0a3be02686a9c4c880d5b9cfa276326d8b8efbc8e4a9d1cedd06cf45fa0269bc"} Jan 30 14:09:10 crc kubenswrapper[4793]: I0130 14:09:10.988809 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.018149 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.024817 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.036024 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.055891 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061188 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061549 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061569 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061593 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061601 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061626 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061632 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.061644 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061651 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061839 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-central-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061866 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="proxy-httpd" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061877 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="ceilometer-notification-agent" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.061893 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" containerName="sg-core" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.063482 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.067517 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.067600 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.067917 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.087402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.099316 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.124531 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.125062 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125106 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} err="failed to get container status \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125131 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc 
kubenswrapper[4793]: E0130 14:09:11.125494 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125529 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} err="failed to get container status \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.125556 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.126513 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.126540 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} err="failed to get container status \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.126557 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: E0130 14:09:11.126782 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.126815 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} err="failed to get container status \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: 
I0130 14:09:11.126843 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127073 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} err="failed to get container status \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127092 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127273 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} err="failed to get container status \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127291 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127511 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} err="failed to get container status \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.127536 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129330 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} err="failed to get container status \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129352 4793 scope.go:117] "RemoveContainer" containerID="325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129677 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392"} err="failed to get container status 
\"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": rpc error: code = NotFound desc = could not find container \"325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392\": container with ID starting with 325d8110e7ba10f4eb9761eed4162383085e5195033aed622da7b9d50d865392 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.129725 4793 scope.go:117] "RemoveContainer" containerID="6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130025 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024"} err="failed to get container status \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": rpc error: code = NotFound desc = could not find container \"6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024\": container with ID starting with 6463b212a3365755e13753fcc2e400d6eb591754c03dcceeef676403517da024 not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130058 4793 scope.go:117] "RemoveContainer" containerID="767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130286 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d"} err="failed to get container status \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": rpc error: code = NotFound desc = could not find container \"767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d\": container with ID starting with 767167ac54d29fdbff0ccf390ea7b4d631945ddaa75a1b107131f70397f0ca7d not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130309 4793 scope.go:117] "RemoveContainer" containerID="c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.130519 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f"} err="failed to get container status \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": rpc error: code = NotFound desc = could not find container \"c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f\": container with ID starting with c439a4b7cfc4804b426f2b4aae386445c6b8c8d9932f6097e9c09350502d0c7f not found: ID does not exist" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.168692 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.168734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.168752 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169876 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.169946 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271812 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271849 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271927 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.271951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.272012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.272034 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.284525 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.284736 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.286252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.286593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.290166 4793 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.296503 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"ceilometer-0\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.380034 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.391144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.476661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.477579 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.477793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.477937 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") pod \"4ba071cd-0f26-432d-809e-709cad1a1e64\" (UID: \"4ba071cd-0f26-432d-809e-709cad1a1e64\") " Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.482261 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts" (OuterVolumeSpecName: "scripts") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.484375 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf" (OuterVolumeSpecName: "kube-api-access-xntcf") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "kube-api-access-xntcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.519365 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data" (OuterVolumeSpecName: "config-data") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.527657 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ba071cd-0f26-432d-809e-709cad1a1e64" (UID: "4ba071cd-0f26-432d-809e-709cad1a1e64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.582745 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntcf\" (UniqueName: \"kubernetes.io/projected/4ba071cd-0f26-432d-809e-709cad1a1e64-kube-api-access-xntcf\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.583158 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.583172 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.583184 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba071cd-0f26-432d-809e-709cad1a1e64-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.870629 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.876322 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:09:11 crc kubenswrapper[4793]: I0130 14:09:11.996099 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"ee58efa07fa4fa9d8d8272dc1241f3340556be6a43a1bbd522489b6d1c064654"} Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.000513 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" event={"ID":"4ba071cd-0f26-432d-809e-709cad1a1e64","Type":"ContainerDied","Data":"10458f2044a1485dd49f34389e009c76947a11228dc091b7963498c198351281"} Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.000555 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10458f2044a1485dd49f34389e009c76947a11228dc091b7963498c198351281" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.000648 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w8lcj" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.153400 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 14:09:12 crc kubenswrapper[4793]: E0130 14:09:12.153832 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerName="nova-cell0-conductor-db-sync" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.153854 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerName="nova-cell0-conductor-db-sync" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.154088 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" containerName="nova-cell0-conductor-db-sync" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.154778 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.160439 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.160979 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rgtrf" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.161164 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.192485 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.192977 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.193127 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kptrf\" (UniqueName: \"kubernetes.io/projected/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-kube-api-access-kptrf\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.295030 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.295135 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 
14:09:12.295203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kptrf\" (UniqueName: \"kubernetes.io/projected/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-kube-api-access-kptrf\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.301559 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.315153 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.318823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kptrf\" (UniqueName: \"kubernetes.io/projected/9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7-kube-api-access-kptrf\") pod \"nova-cell0-conductor-0\" (UID: \"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7\") " pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.409374 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ae5858-557d-445a-b00f-cbdc514dc672" path="/var/lib/kubelet/pods/a1ae5858-557d-445a-b00f-cbdc514dc672/volumes" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.413792 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.413859 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:09:12 crc kubenswrapper[4793]: I0130 14:09:12.487823 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:13 crc kubenswrapper[4793]: I0130 14:09:13.005953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 14:09:13 crc kubenswrapper[4793]: I0130 14:09:13.030826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.043443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7","Type":"ContainerStarted","Data":"892fa6c1c229b673316d98c55fca5515772f1f763e89daeb8075c544712fa9e7"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.043750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.043761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7","Type":"ContainerStarted","Data":"69a0e2f160ecb9f8836eb2fb71c299df78a38363288d4d95b3e3ec748113587d"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.046112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.046153 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f"} Jan 30 14:09:14 crc kubenswrapper[4793]: I0130 14:09:14.069716 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.069697738 podStartE2EDuration="2.069697738s" podCreationTimestamp="2026-01-30 14:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:14.060854683 +0000 UTC m=+1564.762203184" watchObservedRunningTime="2026-01-30 14:09:14.069697738 +0000 UTC m=+1564.771046219" Jan 30 14:09:15 crc kubenswrapper[4793]: I0130 14:09:15.056372 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" exitCode=1 Jan 30 14:09:15 crc kubenswrapper[4793]: I0130 14:09:15.056570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98"} Jan 30 14:09:15 crc kubenswrapper[4793]: I0130 14:09:15.057146 4793 scope.go:117] "RemoveContainer" containerID="e1ee447c1da4c22c8a8e3defd94a820c3fc867c7dfc1d7bd5bb248fe0d49e002" Jan 30 14:09:17 crc kubenswrapper[4793]: I0130 14:09:17.082482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerStarted","Data":"22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3"} Jan 30 14:09:17 crc kubenswrapper[4793]: 
I0130 14:09:17.083066 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:09:17 crc kubenswrapper[4793]: I0130 14:09:17.117500 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.987627619 podStartE2EDuration="6.117484905s" podCreationTimestamp="2026-01-30 14:09:11 +0000 UTC" firstStartedPulling="2026-01-30 14:09:11.876057997 +0000 UTC m=+1562.577406488" lastFinishedPulling="2026-01-30 14:09:16.005915283 +0000 UTC m=+1566.707263774" observedRunningTime="2026-01-30 14:09:17.112893893 +0000 UTC m=+1567.814242384" watchObservedRunningTime="2026-01-30 14:09:17.117484905 +0000 UTC m=+1567.818833396" Jan 30 14:09:22 crc kubenswrapper[4793]: I0130 14:09:22.528432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.051605 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.053026 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.069880 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.070697 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.089220 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156190 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156236 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156339 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.156357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258266 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258317 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.258463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.267980 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.268679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.277833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.305762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"nova-cell0-cell-mapping-75k58\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.370114 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.445826 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.489365 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.495279 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.500117 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.501593 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.528809 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.540113 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.562552 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576681 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576752 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576794 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576887 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod 
\"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.576955 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.690975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691128 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691245 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod \"nova-metadata-0\" 
(UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.691921 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.714371 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.719296 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.727272 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.730919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.732641 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.744849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.745112 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.782363 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"nova-api-0\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") " pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.784660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"nova-metadata-0\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.848100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.857531 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.858752 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.865343 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.889173 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.895447 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.899745 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.899862 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.899914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.906140 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.907867 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.913496 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:23 crc kubenswrapper[4793]: I0130 14:09:23.936483 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002204 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002239 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002265 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002316 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002337 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002373 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.002411 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.009415 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.013871 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.028313 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"nova-scheduler-0\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.106885 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107532 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.107818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc 
kubenswrapper[4793]: I0130 14:09:24.108883 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.108983 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.109818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.110448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.110979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.120041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.120163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.133500 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"nova-cell1-novncproxy-0\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.141730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"dnsmasq-dns-757b4f8459-n2s4l\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.195800 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.224670 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.282325 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.465453 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.618162 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: W0130 14:09:24.644897 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc7ab8_eaab_4f40_888a_e31e12e7e773.slice/crio-0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca WatchSource:0}: Error finding container 0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca: Status 404 returned error can't find the container with id 0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.845559 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: W0130 14:09:24.851512 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea153b39_273a_489d_8964_8cfddfc788e1.slice/crio-b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b WatchSource:0}: Error finding container b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b: Status 404 returned error can't find the container with id b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b Jan 30 14:09:24 crc kubenswrapper[4793]: I0130 14:09:24.965874 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:24 crc kubenswrapper[4793]: W0130 14:09:24.971054 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod946dbfc0_785c_4159_af93_83c11dd8d7e1.slice/crio-1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf WatchSource:0}: Error finding container 1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf: Status 404 returned error can't find the container with id 1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.060986 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.062250 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.064787 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.064976 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.100955 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:09:25 crc kubenswrapper[4793]: W0130 14:09:25.131949 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1817ab34_b020_4268_b88c_126dc437c966.slice/crio-51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d WatchSource:0}: Error finding container 51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d: Status 404 returned error can't find the container with id 51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138363 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.138530 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.139411 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.188671 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerStarted","Data":"1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.192777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerStarted","Data":"0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.195689 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerStarted","Data":"6dbb7f15722c7e00d18758c1026e64f9f4f3aa22d601bb8b93724467cdca1d2e"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.198874 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerStarted","Data":"51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.203991 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerStarted","Data":"b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.206517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerStarted","Data":"c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.206543 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerStarted","Data":"b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc"} Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.227974 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-75k58" podStartSLOduration=2.227956068 podStartE2EDuration="2.227956068s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:25.223559912 +0000 UTC m=+1575.924908403" watchObservedRunningTime="2026-01-30 14:09:25.227956068 +0000 UTC m=+1575.929304559" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240393 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.240460 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.245781 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.245927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.246425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.258991 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"nova-cell1-conductor-db-sync-ml6ks\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") " pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.394972 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" Jan 30 14:09:25 crc kubenswrapper[4793]: I0130 14:09:25.968116 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:09:25 crc kubenswrapper[4793]: W0130 14:09:25.982615 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45bc0c92_8817_447f_a591_d593d49d1b22.slice/crio-a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0 WatchSource:0}: Error finding container a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0: Status 404 returned error can't find the container with id a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0 Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.097748 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.171692 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.171805 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.171880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.172097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.172181 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") pod \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\" (UID: \"ecab991a-220f-4b09-a1fa-f43fef3d0be5\") " Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.172873 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs" (OuterVolumeSpecName: "logs") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.181495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg" (OuterVolumeSpecName: "kube-api-access-wstbg") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "kube-api-access-wstbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.182253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.212942 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data" (OuterVolumeSpecName: "config-data") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.225322 4793 generic.go:334] "Generic (PLEG): container finished" podID="1817ab34-b020-4268-b88c-126dc437c966" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b" exitCode=0 Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.225410 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerDied","Data":"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.232394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerStarted","Data":"a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.235701 4793 generic.go:334] "Generic (PLEG): container finished" podID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" exitCode=137 Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.236736 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b66cd9fcf-c94kp" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.237056 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.237392 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b66cd9fcf-c94kp" event={"ID":"ecab991a-220f-4b09-a1fa-f43fef3d0be5","Type":"ContainerDied","Data":"abb829370f6052fa5b93898ca6acb8788a4543ea051b65ba7f0f97b896bb3dd6"} Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.237495 4793 scope.go:117] "RemoveContainer" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.241806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts" (OuterVolumeSpecName: "scripts") pod "ecab991a-220f-4b09-a1fa-f43fef3d0be5" (UID: "ecab991a-220f-4b09-a1fa-f43fef3d0be5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274365 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wstbg\" (UniqueName: \"kubernetes.io/projected/ecab991a-220f-4b09-a1fa-f43fef3d0be5-kube-api-access-wstbg\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274413 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ecab991a-220f-4b09-a1fa-f43fef3d0be5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274425 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecab991a-220f-4b09-a1fa-f43fef3d0be5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274438 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.274454 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecab991a-220f-4b09-a1fa-f43fef3d0be5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.575835 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.593968 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b66cd9fcf-c94kp"] Jan 30 14:09:26 crc kubenswrapper[4793]: I0130 14:09:26.634496 4793 scope.go:117] "RemoveContainer" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.029977 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.043359 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.253560 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerStarted","Data":"d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920"} Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.568584 4793 scope.go:117] "RemoveContainer" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" Jan 30 14:09:27 crc kubenswrapper[4793]: E0130 14:09:27.569379 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98\": container with ID starting with 320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98 not found: ID does not exist" containerID="320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.569411 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98"} err="failed to get container status \"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98\": rpc error: code = NotFound desc = could not find container 
\"320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98\": container with ID starting with 320a00651a5cc9c0b9401b451e5c1326bbe4e7b6fc7cf0953e150690901d3d98 not found: ID does not exist" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.569429 4793 scope.go:117] "RemoveContainer" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" Jan 30 14:09:27 crc kubenswrapper[4793]: E0130 14:09:27.570635 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c\": container with ID starting with 448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c not found: ID does not exist" containerID="448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c" Jan 30 14:09:27 crc kubenswrapper[4793]: I0130 14:09:27.570685 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c"} err="failed to get container status \"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c\": rpc error: code = NotFound desc = could not find container \"448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c\": container with ID starting with 448f485acfbb6261049c49aa27ba791b31b5e0499fbb192527473b272bec225c not found: ID does not exist" Jan 30 14:09:28 crc kubenswrapper[4793]: I0130 14:09:28.410013 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" path="/var/lib/kubelet/pods/ecab991a-220f-4b09-a1fa-f43fef3d0be5/volumes" Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.278771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerStarted","Data":"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"} Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.279030 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.305463 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" podStartSLOduration=6.305443743 podStartE2EDuration="6.305443743s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:29.303265621 +0000 UTC m=+1580.004614112" watchObservedRunningTime="2026-01-30 14:09:29.305443743 +0000 UTC m=+1580.006792234" Jan 30 14:09:29 crc kubenswrapper[4793]: I0130 14:09:29.307625 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" podStartSLOduration=4.307614126 podStartE2EDuration="4.307614126s" podCreationTimestamp="2026-01-30 14:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:27.270724023 +0000 UTC m=+1577.972072504" watchObservedRunningTime="2026-01-30 14:09:29.307614126 +0000 UTC m=+1580.008962617" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.295933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerStarted","Data":"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.296063 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" gracePeriod=30 Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.300849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerStarted","Data":"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.300889 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerStarted","Data":"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerStarted","Data":"9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerStarted","Data":"84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304261 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" containerID="cri-o://84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065" gracePeriod=30 Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.304504 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" containerID="cri-o://9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa" gracePeriod=30 Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.311569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerStarted","Data":"aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee"} Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.318430 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.682230321 podStartE2EDuration="7.318413618s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.975891483 +0000 UTC m=+1575.677239984" lastFinishedPulling="2026-01-30 14:09:28.61207479 +0000 UTC m=+1579.313423281" observedRunningTime="2026-01-30 14:09:30.314267587 +0000 UTC m=+1581.015616078" watchObservedRunningTime="2026-01-30 14:09:30.318413618 +0000 UTC m=+1581.019762109" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.339828 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=3.57150707 podStartE2EDuration="7.339811816s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.856111283 +0000 UTC m=+1575.557459774" lastFinishedPulling="2026-01-30 14:09:28.624416029 +0000 UTC m=+1579.325764520" observedRunningTime="2026-01-30 14:09:30.333652567 +0000 UTC m=+1581.035001058" watchObservedRunningTime="2026-01-30 14:09:30.339811816 +0000 UTC m=+1581.041160307" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.358800 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.388411484 podStartE2EDuration="7.358781906s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.652617873 +0000 UTC m=+1575.353966364" lastFinishedPulling="2026-01-30 14:09:28.622988295 +0000 UTC m=+1579.324336786" observedRunningTime="2026-01-30 14:09:30.352647306 +0000 UTC m=+1581.053995807" watchObservedRunningTime="2026-01-30 14:09:30.358781906 +0000 UTC m=+1581.060130397" Jan 30 14:09:30 crc kubenswrapper[4793]: I0130 14:09:30.381465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.289222252 podStartE2EDuration="7.381447004s" podCreationTimestamp="2026-01-30 14:09:23 +0000 UTC" firstStartedPulling="2026-01-30 14:09:24.514608891 +0000 UTC m=+1575.215957382" lastFinishedPulling="2026-01-30 14:09:28.606833643 +0000 UTC m=+1579.308182134" observedRunningTime="2026-01-30 14:09:30.374791023 +0000 UTC m=+1581.076139514" watchObservedRunningTime="2026-01-30 14:09:30.381447004 +0000 UTC m=+1581.082795495" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340410 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerID="9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa" exitCode=0 Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340443 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerID="84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065" exitCode=143 Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340505 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerDied","Data":"9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa"} Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.340540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerDied","Data":"84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065"} Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.636034 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.811786 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.812128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.812234 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.812568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") pod \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\" (UID: \"7e4e3a7e-0fdd-4b58-956c-968b50689ce5\") " Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.813058 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs" (OuterVolumeSpecName: "logs") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.813657 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.833467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7" (OuterVolumeSpecName: "kube-api-access-cdgh7") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "kube-api-access-cdgh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.855560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.867951 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data" (OuterVolumeSpecName: "config-data") pod "7e4e3a7e-0fdd-4b58-956c-968b50689ce5" (UID: "7e4e3a7e-0fdd-4b58-956c-968b50689ce5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.915897 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdgh7\" (UniqueName: \"kubernetes.io/projected/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-kube-api-access-cdgh7\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.915953 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:31 crc kubenswrapper[4793]: I0130 14:09:31.915970 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4e3a7e-0fdd-4b58-956c-968b50689ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.354806 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e4e3a7e-0fdd-4b58-956c-968b50689ce5","Type":"ContainerDied","Data":"6dbb7f15722c7e00d18758c1026e64f9f4f3aa22d601bb8b93724467cdca1d2e"} Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.354866 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.354883 4793 scope.go:117] "RemoveContainer" containerID="9f6ee31d211e47671b169133a4e2a9a54ed40bd52183b29bfffe92ebc8f125fa" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.397295 4793 scope.go:117] "RemoveContainer" containerID="84709903c10f8750c54fa7831d7f3c2e5b04ef1090b9b22520f4fc7ef4db1065" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.421661 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.421704 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435259 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435821 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435838 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435854 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435862 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435878 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435887 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435897 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" Jan 30 14:09:32 crc 
kubenswrapper[4793]: I0130 14:09:32.435905 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435924 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.435947 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.435954 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436232 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436252 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436267 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon-log" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436286 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436301 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436315 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" containerName="nova-metadata-metadata" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436332 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: E0130 14:09:32.436570 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.436612 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab991a-220f-4b09-a1fa-f43fef3d0be5" containerName="horizon" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.437602 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.443582 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.443904 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.468875 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539287 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539382 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.539884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642463 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642514 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.642716 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.643230 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.647445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.647515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.648546 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.667533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"nova-metadata-0\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " pod="openstack/nova-metadata-0" Jan 30 14:09:32 crc kubenswrapper[4793]: I0130 14:09:32.767533 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.297655 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.367268 4793 generic.go:334] "Generic (PLEG): container finished" podID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerID="c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37" exitCode=0 Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.367339 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerDied","Data":"c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37"} Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.373586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerStarted","Data":"7c4f0710cca9ec558ca9e50b3847b8e52ce3fc8d37d022a77990843a2d1c1719"} Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.889842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:09:33 crc kubenswrapper[4793]: I0130 14:09:33.890206 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.107636 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.107966 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.148342 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.196366 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.284685 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.380236 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"] Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.384142 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" containerID="cri-o://b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" gracePeriod=10 Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.426322 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4e3a7e-0fdd-4b58-956c-968b50689ce5" path="/var/lib/kubelet/pods/7e4e3a7e-0fdd-4b58-956c-968b50689ce5/volumes" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.450950 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerStarted","Data":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.453277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerStarted","Data":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.476343 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.486697 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.507752 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.507728432 podStartE2EDuration="2.507728432s" podCreationTimestamp="2026-01-30 14:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:34.446220492 +0000 UTC m=+1585.147568993" watchObservedRunningTime="2026-01-30 14:09:34.507728432 +0000 UTC m=+1585.209076933" Jan 30 14:09:34 crc kubenswrapper[4793]: I0130 14:09:34.953291 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:34.998973 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:34.999380 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088407 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088508 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.088565 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") pod \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\" (UID: \"ebcc9239-aedb-41d4-bac8-d03c56c76f4a\") " Jan 30 14:09:35 crc 
kubenswrapper[4793]: I0130 14:09:35.107508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8" (OuterVolumeSpecName: "kube-api-access-fxpm8") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "kube-api-access-fxpm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.120990 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts" (OuterVolumeSpecName: "scripts") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.124390 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.130363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data" (OuterVolumeSpecName: "config-data") pod "ebcc9239-aedb-41d4-bac8-d03c56c76f4a" (UID: "ebcc9239-aedb-41d4-bac8-d03c56c76f4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.190963 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.191006 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.191018 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.191032 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpm8\" (UniqueName: \"kubernetes.io/projected/ebcc9239-aedb-41d4-bac8-d03c56c76f4a-kube-api-access-fxpm8\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.256588 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.393849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394198 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394228 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394315 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394417 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.394735 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") pod \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\" (UID: \"bbe3cabf-7884-41df-adac-ad1bf7e76bf9\") " Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.398843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t" (OuterVolumeSpecName: "kube-api-access-lzc2t") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "kube-api-access-lzc2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427673 4793 generic.go:334] "Generic (PLEG): container finished" podID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" exitCode=0 Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427751 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerDied","Data":"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9"} Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427778 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" event={"ID":"bbe3cabf-7884-41df-adac-ad1bf7e76bf9","Type":"ContainerDied","Data":"067cddf5e14c681c5ac59422d446368a0d6a95f771b27ce5c72d8b49b5b509a7"} Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427821 4793 scope.go:117] "RemoveContainer" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.427991 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t5wk9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.453681 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-75k58" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.457237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-75k58" event={"ID":"ebcc9239-aedb-41d4-bac8-d03c56c76f4a","Type":"ContainerDied","Data":"b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc"} Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.457296 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.461180 4793 scope.go:117] "RemoveContainer" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.480483 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.483662 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config" (OuterVolumeSpecName: "config") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.492840 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497379 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzc2t\" (UniqueName: \"kubernetes.io/projected/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-kube-api-access-lzc2t\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497406 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497415 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.497425 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.511858 4793 scope.go:117] "RemoveContainer" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" Jan 30 14:09:35 crc kubenswrapper[4793]: E0130 14:09:35.512275 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9\": container with ID starting with b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9 not found: ID does not exist" containerID="b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.512306 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9"} err="failed to get container status \"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9\": rpc error: code = NotFound desc = could not find container \"b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9\": container with ID starting with b2bd82b3ef9b12d4048865614052bcb67d0ec723ece31054962871704e93d8e9 not found: ID does not exist" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.512331 4793 scope.go:117] "RemoveContainer" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" Jan 30 14:09:35 crc kubenswrapper[4793]: E0130 14:09:35.512587 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74\": container with ID starting with b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74 not found: ID does not exist" containerID="b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.512611 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74"} err="failed to get container status \"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74\": rpc error: code = NotFound desc = could not find container \"b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74\": container with ID starting with 
b5aaeff4c68e5838b9fcdd13758af4f2dfc383ee485d5f4e5788ecb5c605bc74 not found: ID does not exist" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.568289 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.570193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbe3cabf-7884-41df-adac-ad1bf7e76bf9" (UID: "bbe3cabf-7884-41df-adac-ad1bf7e76bf9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.583986 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.584209 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" containerID="cri-o://3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3" gracePeriod=30 Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.584701 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" containerID="cri-o://b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003" gracePeriod=30 Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.601470 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.603437 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe3cabf-7884-41df-adac-ad1bf7e76bf9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.667660 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.719509 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:09:35 crc kubenswrapper[4793]: E0130 14:09:35.759377 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebcc9239_aedb_41d4_bac8_d03c56c76f4a.slice/crio-b0dc24251680382ac5368495457f086b3ed5dd146adcca5ddd5d5c1ebfc039cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc7ab8_eaab_4f40_888a_e31e12e7e773.slice/crio-3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebcc9239_aedb_41d4_bac8_d03c56c76f4a.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc7ab8_eaab_4f40_888a_e31e12e7e773.slice/crio-conmon-3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.766489 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"] Jan 30 14:09:35 crc kubenswrapper[4793]: I0130 14:09:35.774748 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t5wk9"] Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.409772 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" path="/var/lib/kubelet/pods/bbe3cabf-7884-41df-adac-ad1bf7e76bf9/volumes" Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463378 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3" exitCode=143 Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463564 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" containerID="cri-o://f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" gracePeriod=30 Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerDied","Data":"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"} Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.463891 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler" containerID="cri-o://aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee" gracePeriod=30 Jan 30 14:09:36 crc kubenswrapper[4793]: I0130 14:09:36.464128 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" containerID="cri-o://02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" gracePeriod=30 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.032676 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.134808 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.134964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.135158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.135199 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.135805 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") pod \"dc77fb59-5785-42af-8629-c3bd9e024983\" (UID: \"dc77fb59-5785-42af-8629-c3bd9e024983\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.136098 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs" (OuterVolumeSpecName: "logs") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.136534 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc77fb59-5785-42af-8629-c3bd9e024983-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.140622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b" (OuterVolumeSpecName: "kube-api-access-b7r2b") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "kube-api-access-b7r2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.161708 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.177003 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data" (OuterVolumeSpecName: "config-data") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.204287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dc77fb59-5785-42af-8629-c3bd9e024983" (UID: "dc77fb59-5785-42af-8629-c3bd9e024983"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238075 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7r2b\" (UniqueName: \"kubernetes.io/projected/dc77fb59-5785-42af-8629-c3bd9e024983-kube-api-access-b7r2b\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238110 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238120 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.238129 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc77fb59-5785-42af-8629-c3bd9e024983-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.475374 4793 generic.go:334] "Generic (PLEG): container finished" podID="45bc0c92-8817-447f-a591-d593d49d1b22" containerID="d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920" exitCode=0 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.476720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerDied","Data":"d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488172 4793 generic.go:334] "Generic (PLEG): container finished" podID="dc77fb59-5785-42af-8629-c3bd9e024983" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" exitCode=0 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488201 4793 generic.go:334] "Generic (PLEG): container finished" podID="dc77fb59-5785-42af-8629-c3bd9e024983" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" exitCode=143 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488236 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerDied","Data":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerDied","Data":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc77fb59-5785-42af-8629-c3bd9e024983","Type":"ContainerDied","Data":"7c4f0710cca9ec558ca9e50b3847b8e52ce3fc8d37d022a77990843a2d1c1719"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488500 4793 scope.go:117] "RemoveContainer" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.488784 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.510359 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea153b39-273a-489d-8964-8cfddfc788e1" containerID="aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee" exitCode=0 Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.510401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerDied","Data":"aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee"} Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.554011 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.562428 4793 scope.go:117] "RemoveContainer" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.565583 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575102 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575495 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="init" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575508 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="init" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575528 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575549 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575556 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575578 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerName="nova-manage" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575583 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerName="nova-manage" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.575590 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575597 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575755 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-log" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575770 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe3cabf-7884-41df-adac-ad1bf7e76bf9" containerName="dnsmasq-dns" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575783 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" containerName="nova-metadata-metadata" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.575793 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" containerName="nova-manage" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.577106 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.582764 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.582764 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.596376 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.611409 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.628363 4793 scope.go:117] "RemoveContainer" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.628780 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": container with ID starting with 02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d not found: ID does not exist" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.628818 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} err="failed to get container status \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": rpc error: code = NotFound desc = could not find container \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": container with ID starting with 02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.628838 4793 scope.go:117] "RemoveContainer" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: E0130 14:09:37.629187 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": container with ID starting with f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b not found: ID does not exist" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629225 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} err="failed to get container status \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": rpc error: code = NotFound desc = could not find container \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": container with ID starting with f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629240 4793 scope.go:117] "RemoveContainer" containerID="02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629466 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d"} err="failed to get container status \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": rpc error: code = NotFound desc = could not find container \"02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d\": container with ID starting with 02b6f4660424c7b041c9dc80a960e604fd35c6add9e4ce45531277917b79f46d not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629481 4793 scope.go:117] "RemoveContainer" containerID="f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.629718 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b"} err="failed to get container status \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": rpc error: code = NotFound desc = could not find container \"f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b\": container with ID starting with f55178228d6bd70bd488b1a0c1d35b7623eb8cfb38e15230b5c9e58c2336a27b not found: ID does not exist" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650284 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650327 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.650506 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.752914 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") pod \"ea153b39-273a-489d-8964-8cfddfc788e1\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.753294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") pod \"ea153b39-273a-489d-8964-8cfddfc788e1\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.753416 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") pod \"ea153b39-273a-489d-8964-8cfddfc788e1\" (UID: \"ea153b39-273a-489d-8964-8cfddfc788e1\") " Jan 30 14:09:37 crc kubenswrapper[4793]: 
I0130 14:09:37.753831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754034 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754193 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.754475 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.756631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.758604 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.758810 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv" (OuterVolumeSpecName: "kube-api-access-hjzmv") pod "ea153b39-273a-489d-8964-8cfddfc788e1" (UID: "ea153b39-273a-489d-8964-8cfddfc788e1"). InnerVolumeSpecName "kube-api-access-hjzmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.760660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.763234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.776780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"nova-metadata-0\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " pod="openstack/nova-metadata-0" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.787525 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data" (OuterVolumeSpecName: "config-data") pod "ea153b39-273a-489d-8964-8cfddfc788e1" (UID: "ea153b39-273a-489d-8964-8cfddfc788e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.802451 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea153b39-273a-489d-8964-8cfddfc788e1" (UID: "ea153b39-273a-489d-8964-8cfddfc788e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.856022 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.856070 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjzmv\" (UniqueName: \"kubernetes.io/projected/ea153b39-273a-489d-8964-8cfddfc788e1-kube-api-access-hjzmv\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.856081 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea153b39-273a-489d-8964-8cfddfc788e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:09:37 crc kubenswrapper[4793]: I0130 14:09:37.929983 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.375579 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: W0130 14:09:38.382468 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ed6c75_bf0d_4f2f_a470_42fd54e304da.slice/crio-8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721 WatchSource:0}: Error finding container 8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721: Status 404 returned error can't find the container with id 8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.412755 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc77fb59-5785-42af-8629-c3bd9e024983" path="/var/lib/kubelet/pods/dc77fb59-5785-42af-8629-c3bd9e024983/volumes"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.526503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ea153b39-273a-489d-8964-8cfddfc788e1","Type":"ContainerDied","Data":"b79e4c94f61e795e4871651dd3246ac5673f935fef8bbf454e20718af00efe9b"}
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.526832 4793 scope.go:117] "RemoveContainer" containerID="aedab7e636cfadaa8cce12328c9b2c0d0677045f1058517845d9c2fc6e4ef3ee"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.526768 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.535870 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerStarted","Data":"8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721"}
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.552673 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.559406 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.582616 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: E0130 14:09:38.583001 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.583018 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.583230 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" containerName="nova-scheduler-scheduler"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.583804 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.587206 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.615710 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.680956 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.681030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.681087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.783081 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.783164 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.783221 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.790912 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.791365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.803195 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"nova-scheduler-0\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.864760 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.907562 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987548 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.987754 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") pod \"45bc0c92-8817-447f-a591-d593d49d1b22\" (UID: \"45bc0c92-8817-447f-a591-d593d49d1b22\") "
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.992713 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts" (OuterVolumeSpecName: "scripts") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:38 crc kubenswrapper[4793]: I0130 14:09:38.998946 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb" (OuterVolumeSpecName: "kube-api-access-pvphb") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "kube-api-access-pvphb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.039655 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data" (OuterVolumeSpecName: "config-data") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.061116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45bc0c92-8817-447f-a591-d593d49d1b22" (UID: "45bc0c92-8817-447f-a591-d593d49d1b22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091203 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091236 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvphb\" (UniqueName: \"kubernetes.io/projected/45bc0c92-8817-447f-a591-d593d49d1b22-kube-api-access-pvphb\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091245 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.091252 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45bc0c92-8817-447f-a591-d593d49d1b22-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.381737 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.550023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerStarted","Data":"0c43fd7a19c8e62a860f534d7237c66cb3f8e183b6b7d0b236a6b8cd04692810"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.555739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ml6ks" event={"ID":"45bc0c92-8817-447f-a591-d593d49d1b22","Type":"ContainerDied","Data":"a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.555784 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a679a51b0c6e6137e2ec5414eeb13b529804081ead1233b8cc65b0c2cf5027d0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.555850 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ml6ks"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.568903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerStarted","Data":"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.568953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerStarted","Data":"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"}
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.600775 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 14:09:39 crc kubenswrapper[4793]: E0130 14:09:39.601120 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" containerName="nova-cell1-conductor-db-sync"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.601131 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" containerName="nova-cell1-conductor-db-sync"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.601430 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" containerName="nova-cell1-conductor-db-sync"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.623769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.632148 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.679196 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.694833 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.694811732 podStartE2EDuration="2.694811732s" podCreationTimestamp="2026-01-30 14:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:39.600502417 +0000 UTC m=+1590.301850908" watchObservedRunningTime="2026-01-30 14:09:39.694811732 +0000 UTC m=+1590.396160233"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.703518 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.703596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.703762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgq7n\" (UniqueName: \"kubernetes.io/projected/d2acd609-26c0-4b98-861f-a8b12fcd07bf-kube-api-access-xgq7n\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.805770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.805827 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.805886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgq7n\" (UniqueName: \"kubernetes.io/projected/d2acd609-26c0-4b98-861f-a8b12fcd07bf-kube-api-access-xgq7n\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.811284 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.812426 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2acd609-26c0-4b98-861f-a8b12fcd07bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.821154 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgq7n\" (UniqueName: \"kubernetes.io/projected/d2acd609-26c0-4b98-861f-a8b12fcd07bf-kube-api-access-xgq7n\") pod \"nova-cell1-conductor-0\" (UID: \"d2acd609-26c0-4b98-861f-a8b12fcd07bf\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:39 crc kubenswrapper[4793]: I0130 14:09:39.948699 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.411459 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea153b39-273a-489d-8964-8cfddfc788e1" path="/var/lib/kubelet/pods/ea153b39-273a-489d-8964-8cfddfc788e1/volumes"
Jan 30 14:09:40 crc kubenswrapper[4793]: W0130 14:09:40.431378 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2acd609_26c0_4b98_861f_a8b12fcd07bf.slice/crio-0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0 WatchSource:0}: Error finding container 0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0: Status 404 returned error can't find the container with id 0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.432857 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.578784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d2acd609-26c0-4b98-861f-a8b12fcd07bf","Type":"ContainerStarted","Data":"0d7f6a07316a9fbb9980900056b3c6a5a645157b8a92893dec47572b136c5bc0"}
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.581155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerStarted","Data":"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a"}
Jan 30 14:09:40 crc kubenswrapper[4793]: I0130 14:09:40.602709 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.60268697 podStartE2EDuration="2.60268697s" podCreationTimestamp="2026-01-30 14:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:40.599028511 +0000 UTC m=+1591.300377012" watchObservedRunningTime="2026-01-30 14:09:40.60268697 +0000 UTC m=+1591.304035461"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.403829 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.574792 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.621945 4793 generic.go:334] "Generic (PLEG): container finished" podID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003" exitCode=0
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerDied","Data":"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"}
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0bc7ab8-eaab-4f40-888a-e31e12e7e773","Type":"ContainerDied","Data":"0f7b1e63c6586afd494bffb3cd6108f0bd39ae0f843d930d8e6a29831d4dc1ca"}
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622017 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.622285 4793 scope.go:117] "RemoveContainer" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.642726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.642891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643016 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643162 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") pod \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\" (UID: \"c0bc7ab8-eaab-4f40-888a-e31e12e7e773\") "
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643514 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs" (OuterVolumeSpecName: "logs") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.643693 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-logs\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.645293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d2acd609-26c0-4b98-861f-a8b12fcd07bf","Type":"ContainerStarted","Data":"fae27845939cb8c0afbf747f63b3a1a8d4c95dac8d7eb0b4f48c1fa2352a21a3"}
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.645332 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.654577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t" (OuterVolumeSpecName: "kube-api-access-t4h9t") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "kube-api-access-t4h9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.668219 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.668198006 podStartE2EDuration="2.668198006s" podCreationTimestamp="2026-01-30 14:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:41.667658564 +0000 UTC m=+1592.369007065" watchObservedRunningTime="2026-01-30 14:09:41.668198006 +0000 UTC m=+1592.369546497"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.678435 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data" (OuterVolumeSpecName: "config-data") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.688778 4793 scope.go:117] "RemoveContainer" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.714552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0bc7ab8-eaab-4f40-888a-e31e12e7e773" (UID: "c0bc7ab8-eaab-4f40-888a-e31e12e7e773"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.746817 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.746867 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.746880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4h9t\" (UniqueName: \"kubernetes.io/projected/c0bc7ab8-eaab-4f40-888a-e31e12e7e773-kube-api-access-t4h9t\") on node \"crc\" DevicePath \"\""
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.785535 4793 scope.go:117] "RemoveContainer" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"
Jan 30 14:09:41 crc kubenswrapper[4793]: E0130 14:09:41.785989 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": container with ID starting with b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003 not found: ID does not exist" containerID="b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"
Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.786023 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003"} err="failed to get container status \"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": rpc error: code = NotFound desc = could not find container \"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": container with ID starting with b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003 not found: ID does not exist"
\"b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003\": container with ID starting with b73dabbf4e9fe48c1bd69d66c2b954d0d5990c4afc90f838316ff2427e23e003 not found: ID does not exist" Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.786055 4793 scope.go:117] "RemoveContainer" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3" Jan 30 14:09:41 crc kubenswrapper[4793]: E0130 14:09:41.786410 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3\": container with ID starting with 3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3 not found: ID does not exist" containerID="3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3" Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.786437 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3"} err="failed to get container status \"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3\": rpc error: code = NotFound desc = could not find container \"3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3\": container with ID starting with 3e218ce2019fe7c6a5f290c9270d89796a75de5bcc06c2e8dc9a4a69c7a51ab3 not found: ID does not exist" Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.971092 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:41 crc kubenswrapper[4793]: I0130 14:09:41.984203 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.002064 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:42 crc kubenswrapper[4793]: E0130 14:09:42.002835 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.002867 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" Jan 30 14:09:42 crc kubenswrapper[4793]: E0130 14:09:42.002890 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.002899 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.003212 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-log" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.003250 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" containerName="nova-api-api" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.004786 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.008319 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.027929 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.159617 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261659 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.261685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.262695 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " 
pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.271855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.272653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.286375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"nova-api-0\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.322790 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.408845 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bc7ab8-eaab-4f40-888a-e31e12e7e773" path="/var/lib/kubelet/pods/c0bc7ab8-eaab-4f40-888a-e31e12e7e773/volumes" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.419309 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.419634 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:09:42 crc kubenswrapper[4793]: W0130 14:09:42.783647 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod192f1855_5895_4928_ad91_e3bded531967.slice/crio-a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90 WatchSource:0}: Error finding container a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90: Status 404 returned error can't find the container with id a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90 Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.787580 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.931143 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 14:09:42 crc kubenswrapper[4793]: I0130 14:09:42.931381 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.663520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerStarted","Data":"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d"} Jan 30 
14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.663562 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerStarted","Data":"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f"} Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.663572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerStarted","Data":"a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90"} Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.688152 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.688129779 podStartE2EDuration="2.688129779s" podCreationTimestamp="2026-01-30 14:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:09:43.678874764 +0000 UTC m=+1594.380223255" watchObservedRunningTime="2026-01-30 14:09:43.688129779 +0000 UTC m=+1594.389478280" Jan 30 14:09:43 crc kubenswrapper[4793]: I0130 14:09:43.907918 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 14:09:47 crc kubenswrapper[4793]: I0130 14:09:47.930758 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 14:09:47 crc kubenswrapper[4793]: I0130 14:09:47.931170 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.907830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.934176 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.943359 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:48 crc kubenswrapper[4793]: I0130 14:09:48.943425 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:49 crc kubenswrapper[4793]: I0130 14:09:49.765174 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 14:09:49 crc kubenswrapper[4793]: I0130 14:09:49.980652 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 14:09:52 crc kubenswrapper[4793]: I0130 14:09:52.324126 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:09:52 crc kubenswrapper[4793]: I0130 14:09:52.324492 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:09:53 crc kubenswrapper[4793]: I0130 14:09:53.406329 4793 prober.go:107] "Probe failed" probeType="Startup" 
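The prober.go:107 "Probe failed" entries above are what to grep when a pod's startup probes flap after a restart, as nova-metadata-0 and nova-api-0 do here. A small tally sketch over a saved excerpt of this journal (the file name is hypothetical):

```python
import re
from collections import Counter

# Matches the prober.go:107 fields seen in the entries above.
PROBE = re.compile(r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)" '
                   r'podUID="[^"]+" containerName="(?P<ctr>[^"]+)"')

failures = Counter()
with open("kubelet-journal.log") as fh:      # hypothetical saved excerpt of the log above
    for line in fh:
        m = PROBE.search(line)
        if m:
            failures[(m["pod"], m["ctr"], m["type"])] += 1

for (pod, ctr, ptype), n in failures.most_common():
    print(f"{pod} {ctr}: {n} failed {ptype} probe(s)")
```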
pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:53 crc kubenswrapper[4793]: I0130 14:09:53.406454 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.937641 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.939593 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.946649 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 14:09:57 crc kubenswrapper[4793]: I0130 14:09:57.947529 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.663512 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.798767 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") pod \"946dbfc0-785c-4159-af93-83c11dd8d7e1\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.798951 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") pod \"946dbfc0-785c-4159-af93-83c11dd8d7e1\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.799020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") pod \"946dbfc0-785c-4159-af93-83c11dd8d7e1\" (UID: \"946dbfc0-785c-4159-af93-83c11dd8d7e1\") " Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.805183 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td" (OuterVolumeSpecName: "kube-api-access-8m4td") pod "946dbfc0-785c-4159-af93-83c11dd8d7e1" (UID: "946dbfc0-785c-4159-af93-83c11dd8d7e1"). InnerVolumeSpecName "kube-api-access-8m4td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.826136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "946dbfc0-785c-4159-af93-83c11dd8d7e1" (UID: "946dbfc0-785c-4159-af93-83c11dd8d7e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.828430 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data" (OuterVolumeSpecName: "config-data") pod "946dbfc0-785c-4159-af93-83c11dd8d7e1" (UID: "946dbfc0-785c-4159-af93-83c11dd8d7e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868459 4793 generic.go:334] "Generic (PLEG): container finished" podID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" exitCode=137 Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerDied","Data":"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"} Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868532 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"946dbfc0-785c-4159-af93-83c11dd8d7e1","Type":"ContainerDied","Data":"1fefeee02348cae466643167ff300193a0079c4a4093e5a2e4f25f3447fef7bf"} Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868551 4793 scope.go:117] "RemoveContainer" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.868553 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.901489 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.901523 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m4td\" (UniqueName: \"kubernetes.io/projected/946dbfc0-785c-4159-af93-83c11dd8d7e1-kube-api-access-8m4td\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.901538 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dbfc0-785c-4159-af93-83c11dd8d7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.915336 4793 scope.go:117] "RemoveContainer" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" Jan 30 14:10:00 crc kubenswrapper[4793]: E0130 14:10:00.918974 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d\": container with ID starting with 32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d not found: ID does not exist" containerID="32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.919249 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d"} err="failed to get container status \"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d\": rpc error: code = NotFound desc = could not 
find container \"32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d\": container with ID starting with 32cc4d6dd43a303e26fbe67f15da72c2bde95e86e592965dd3bac8e6dfd7352d not found: ID does not exist" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.938379 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.949116 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.962753 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:10:00 crc kubenswrapper[4793]: E0130 14:10:00.963279 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.963299 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.963530 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.964336 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.967310 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.967562 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.967677 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 14:10:00 crc kubenswrapper[4793]: I0130 14:10:00.976563 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105689 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.105988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-j4vpw\" (UniqueName: \"kubernetes.io/projected/abaabb74-42dd-40b6-9cb7-69db46f235df-kube-api-access-j4vpw\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.106147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207883 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207907 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vpw\" (UniqueName: \"kubernetes.io/projected/abaabb74-42dd-40b6-9cb7-69db46f235df-kube-api-access-j4vpw\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.207970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.212666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.213913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.215821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.217684 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaabb74-42dd-40b6-9cb7-69db46f235df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.226237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vpw\" (UniqueName: \"kubernetes.io/projected/abaabb74-42dd-40b6-9cb7-69db46f235df-kube-api-access-j4vpw\") pod \"nova-cell1-novncproxy-0\" (UID: \"abaabb74-42dd-40b6-9cb7-69db46f235df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.291877 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.758688 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 14:10:01 crc kubenswrapper[4793]: W0130 14:10:01.767945 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabaabb74_42dd_40b6_9cb7_69db46f235df.slice/crio-bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d WatchSource:0}: Error finding container bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d: Status 404 returned error can't find the container with id bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d Jan 30 14:10:01 crc kubenswrapper[4793]: I0130 14:10:01.882431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"abaabb74-42dd-40b6-9cb7-69db46f235df","Type":"ContainerStarted","Data":"bd4aab829a3ce19952c98fef567ee92cfcfb12d99da0c93df580109c0bd9995d"} Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.330223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.331064 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.331672 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.334822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.428938 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946dbfc0-785c-4159-af93-83c11dd8d7e1" path="/var/lib/kubelet/pods/946dbfc0-785c-4159-af93-83c11dd8d7e1/volumes" Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.900329 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"abaabb74-42dd-40b6-9cb7-69db46f235df","Type":"ContainerStarted","Data":"96d21d4383f42ab4e78d9f1eb561cbc4de823973cf57bcc4f3433a0cf8728d8b"} Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.901216 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 14:10:02 crc 
Jan 30 14:10:02 crc kubenswrapper[4793]: I0130 14:10:02.928376 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.928357944 podStartE2EDuration="2.928357944s" podCreationTimestamp="2026-01-30 14:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:02.917228824 +0000 UTC m=+1613.618577315" watchObservedRunningTime="2026-01-30 14:10:02.928357944 +0000 UTC m=+1613.629706435"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.118391 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"]
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.120195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.150230 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"]
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264009 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264300 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264488 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.264696 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.365709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.365975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366139 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.366856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.367293 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.367404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.367450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.388824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"dnsmasq-dns-89c5cd4d5-cxkd2\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:03 crc kubenswrapper[4793]: I0130 14:10:03.439444 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.571548 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"]
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.919458 4793 generic.go:334] "Generic (PLEG): container finished" podID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" exitCode=0
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.919563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerDied","Data":"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889"}
Jan 30 14:10:04 crc kubenswrapper[4793]: I0130 14:10:04.920011 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerStarted","Data":"78fb92af330aba5ae85ee09e8c30d31dd6612ee663286c5bea03ea04be9abef3"}
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.804608 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.948060 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.948562 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" containerID="cri-o://14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1" gracePeriod=30
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.948815 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" containerID="cri-o://22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3" gracePeriod=30
Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.949070 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0"
podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" containerID="cri-o://35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f" gracePeriod=30 Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.949507 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" containerID="cri-o://9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5" gracePeriod=30 Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.963312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerStarted","Data":"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4"} Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.966015 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" containerID="cri-o://dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" gracePeriod=30 Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.967770 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" Jan 30 14:10:05 crc kubenswrapper[4793]: I0130 14:10:05.967813 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" containerID="cri-o://a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" gracePeriod=30 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.008583 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" podStartSLOduration=3.008560146 podStartE2EDuration="3.008560146s" podCreationTimestamp="2026-01-30 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:05.989832102 +0000 UTC m=+1616.691180613" watchObservedRunningTime="2026-01-30 14:10:06.008560146 +0000 UTC m=+1616.709908647" Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.292006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:06 crc kubenswrapper[4793]: E0130 14:10:06.471400 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d6b1bbd_8431_4e0c_882a_6ec9dee336f2.slice/crio-conmon-14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1.scope\": RecentStats: unable to find data in memory cache]" Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981347 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerID="22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3" exitCode=0 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981380 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerID="9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5" exitCode=2 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981389 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" 
containerID="14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1" exitCode=0 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3"} Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5"} Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.981463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1"} Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.983349 4793 generic.go:334] "Generic (PLEG): container finished" podID="192f1855-5895-4928-ad91-e3bded531967" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" exitCode=143 Jan 30 14:10:06 crc kubenswrapper[4793]: I0130 14:10:06.984528 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerDied","Data":"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f"} Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.605632 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802556 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802679 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.802790 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") pod \"192f1855-5895-4928-ad91-e3bded531967\" (UID: \"192f1855-5895-4928-ad91-e3bded531967\") " Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.803740 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs" (OuterVolumeSpecName: "logs") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.815489 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq" (OuterVolumeSpecName: "kube-api-access-vbbdq") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "kube-api-access-vbbdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.859489 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.869022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data" (OuterVolumeSpecName: "config-data") pod "192f1855-5895-4928-ad91-e3bded531967" (UID: "192f1855-5895-4928-ad91-e3bded531967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905270 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905302 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192f1855-5895-4928-ad91-e3bded531967-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905314 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbbdq\" (UniqueName: \"kubernetes.io/projected/192f1855-5895-4928-ad91-e3bded531967-kube-api-access-vbbdq\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:09 crc kubenswrapper[4793]: I0130 14:10:09.905322 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192f1855-5895-4928-ad91-e3bded531967-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024354 4793 generic.go:334] "Generic (PLEG): container finished" podID="192f1855-5895-4928-ad91-e3bded531967" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" exitCode=0 Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerDied","Data":"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d"} Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192f1855-5895-4928-ad91-e3bded531967","Type":"ContainerDied","Data":"a60f0efe8fb07eeb18fb57a1f165913b971130ef8a6693c2bd5863d0b6756b90"} Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024461 4793 scope.go:117] "RemoveContainer" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.024482 4793 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.062702 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.066626 4793 scope.go:117] "RemoveContainer" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.073733 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.088361 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.088974 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089051 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.089143 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089229 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089543 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-log" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.089624 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="192f1855-5895-4928-ad91-e3bded531967" containerName="nova-api-api" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.090895 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.097275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.101447 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.101467 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.113805 4793 scope.go:117] "RemoveContainer" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.115220 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d\": container with ID starting with a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d not found: ID does not exist" containerID="a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.115259 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d"} err="failed to get container status \"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d\": rpc error: code = NotFound desc = could not find container \"a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d\": container with ID starting with a9ea6b736cb5559d96c2e82e1d2137a3fd95b4cfa66db4bd3c0a27a67607c37d not found: ID does not exist" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.115291 4793 scope.go:117] "RemoveContainer" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" Jan 30 14:10:10 crc kubenswrapper[4793]: E0130 14:10:10.116357 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f\": container with ID starting with dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f not found: ID does not exist" containerID="dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.116409 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f"} err="failed to get container status \"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f\": rpc error: code = NotFound desc = could not find container \"dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f\": container with ID starting with dc7c9d86d848d3b5f5a89ce14c637a0ab6f2579d58b70c42daf94d2af1e5e80f not found: ID does not exist" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.120909 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.210499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 
14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.210882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211183 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211290 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.211341 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.314574 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.314685 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.314740 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.315075 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.316365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"nova-api-0\" (UID: 
\"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.316435 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.317011 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318472 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318799 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.318994 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.328627 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.330636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.332673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.338617 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"nova-api-0\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.408941 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192f1855-5895-4928-ad91-e3bded531967" path="/var/lib/kubelet/pods/192f1855-5895-4928-ad91-e3bded531967/volumes" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.410613 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:10 crc kubenswrapper[4793]: I0130 14:10:10.879890 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:10 crc kubenswrapper[4793]: W0130 14:10:10.882965 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f197d5_ac2e_4907_aaaf_78ac1156368c.slice/crio-e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86 WatchSource:0}: Error finding container e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86: Status 404 returned error can't find the container with id e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86 Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.044515 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerStarted","Data":"e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86"} Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.051077 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerID="35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f" exitCode=0 Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.051109 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f"} Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.106077 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144111 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144189 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144388 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc 
kubenswrapper[4793]: I0130 14:10:11.144445 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144503 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.144557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") pod \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\" (UID: \"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2\") " Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.146304 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.146567 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.155149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk" (OuterVolumeSpecName: "kube-api-access-ss2qk") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "kube-api-access-ss2qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.167495 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts" (OuterVolumeSpecName: "scripts") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.225728 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.242516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246567 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246595 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246605 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246614 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246623 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.246631 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss2qk\" (UniqueName: \"kubernetes.io/projected/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-kube-api-access-ss2qk\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.292370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.307790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.327147 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.350241 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.368478 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data" (OuterVolumeSpecName: "config-data") pod "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" (UID: "9d6b1bbd-8431-4e0c-882a-6ec9dee336f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:11 crc kubenswrapper[4793]: I0130 14:10:11.452439 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.074712 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d6b1bbd-8431-4e0c-882a-6ec9dee336f2","Type":"ContainerDied","Data":"ee58efa07fa4fa9d8d8272dc1241f3340556be6a43a1bbd522489b6d1c064654"} Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.074767 4793 scope.go:117] "RemoveContainer" containerID="22e3b2b4f8af8c074e2701dd075aff341ca69019ed98db94c94c5c8c8fac5cc3" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.074969 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.078234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerStarted","Data":"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"} Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.078293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerStarted","Data":"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"} Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.099156 4793 scope.go:117] "RemoveContainer" containerID="9d08b2914bdb19816d93c8f01afbbd1f5c6993dc4e90cc049ba23dc54276f1e5" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.111621 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.123892 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.123842766 podStartE2EDuration="2.123842766s" podCreationTimestamp="2026-01-30 14:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:12.111983809 +0000 UTC m=+1622.813332300" watchObservedRunningTime="2026-01-30 14:10:12.123842766 +0000 UTC m=+1622.825191257" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.142615 4793 scope.go:117] "RemoveContainer" containerID="35435e31f9baea1e4c9263c0e0abafdae31a9145d621c42772e5dd4993b88a8f" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.167203 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.185987 4793 scope.go:117] "RemoveContainer" containerID="14c5e5290d598f46c34890c9a841a85b87492f2237d89b7ffdeee5e8f99bb6c1" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.188527 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211086 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211837 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211865 4793 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211886 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211896 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211935 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211945 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.211955 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.211963 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212211 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-notification-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212251 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="sg-core" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212262 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="proxy-httpd" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.212276 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" containerName="ceilometer-central-agent" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.214344 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.226057 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.226216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.226473 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.259076 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.274804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.274917 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-run-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.274952 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-config-data\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-log-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfq9p\" (UniqueName: \"kubernetes.io/projected/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-kube-api-access-lfq9p\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.275142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-scripts\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.348459 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.349602 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.359748 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.361792 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.367915 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"] Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.380937 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-config-data\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381197 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-log-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-log-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.381886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.382914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfq9p\" (UniqueName: \"kubernetes.io/projected/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-kube-api-access-lfq9p\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.382955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-scripts\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0" 
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383128 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383206 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-run-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.383308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.384086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-run-httpd\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.385131 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.397139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-scripts\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.398585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.399239 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-config-data\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.401655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.402183 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.410439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfq9p\" (UniqueName: \"kubernetes.io/projected/4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d-kube-api-access-lfq9p\") pod \"ceilometer-0\" (UID: \"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d\") " pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.413728 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.413781 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.419265 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6b1bbd-8431-4e0c-882a-6ec9dee336f2" path="/var/lib/kubelet/pods/9d6b1bbd-8431-4e0c-882a-6ec9dee336f2/volumes"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.420360 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.421087 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.421150 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" gracePeriod=600
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489183 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489705 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.489831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.502244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.502703 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.504899 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.519703 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"nova-cell1-cell-mapping-mrwzs\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:12 crc kubenswrapper[4793]: E0130 14:10:12.553906 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.556319 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 14:10:12 crc kubenswrapper[4793]: I0130 14:10:12.672708 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs"
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.077162 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 14:10:13 crc kubenswrapper[4793]: W0130 14:10:13.083949 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9dd9b5_407b_47a1_91ee_5ee7a8b4816d.slice/crio-56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4 WatchSource:0}: Error finding container 56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4: Status 404 returned error can't find the container with id 56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.096458 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" exitCode=0
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.096489 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"}
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.096535 4793 scope.go:117] "RemoveContainer" containerID="f37b4adcd989135b3a0199183c5b09641f48fc83f250e8154636cac5c1ad21e6"
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.097136 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:10:13 crc kubenswrapper[4793]: E0130 14:10:13.097404 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:10:13 crc kubenswrapper[4793]: W0130 14:10:13.344511 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ed75d8_77f2_4c4d_b725_b703b8ce2980.slice/crio-d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c WatchSource:0}: Error finding container d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c: Status 404 returned error can't find the container with id d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.347240 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"]
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.441226 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2"
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.512517 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"]
Jan 30 14:10:13 crc kubenswrapper[4793]: I0130 14:10:13.513267 4793 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns" containerID="cri-o://62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" gracePeriod=10 Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.032347 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.112883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f"} Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.112922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"56f177460d4c30d2d717f450b291a4ba505553f8cff08ffa93d7da1245b03ba4"} Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.114807 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerStarted","Data":"596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80"} Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.114849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerStarted","Data":"d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c"} Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116848 4793 generic.go:334] "Generic (PLEG): container finished" podID="1817ab34-b020-4268-b88c-126dc437c966" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" exitCode=0 Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerDied","Data":"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"} Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" event={"ID":"1817ab34-b020-4268-b88c-126dc437c966","Type":"ContainerDied","Data":"51b9f220023c2df2b6b701ab065f62d75d5f6cee33ff2d1780a9cb8c10fdb12d"} Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.116949 4793 scope.go:117] "RemoveContainer" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.117149 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-n2s4l" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124436 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124501 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124581 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.124673 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") pod \"1817ab34-b020-4268-b88c-126dc437c966\" (UID: \"1817ab34-b020-4268-b88c-126dc437c966\") " Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.137762 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz" (OuterVolumeSpecName: "kube-api-access-nj6mz") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "kube-api-access-nj6mz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.152707 4793 scope.go:117] "RemoveContainer" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.154772 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mrwzs" podStartSLOduration=2.154746314 podStartE2EDuration="2.154746314s" podCreationTimestamp="2026-01-30 14:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:14.143203804 +0000 UTC m=+1624.844552315" watchObservedRunningTime="2026-01-30 14:10:14.154746314 +0000 UTC m=+1624.856094805" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.185927 4793 scope.go:117] "RemoveContainer" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" Jan 30 14:10:14 crc kubenswrapper[4793]: E0130 14:10:14.186591 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad\": container with ID starting with 62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad not found: ID does not exist" containerID="62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.186744 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad"} err="failed to get container status \"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad\": rpc error: code = NotFound desc = could not find container \"62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad\": container with ID starting with 62e49be9fe2ea777e83b78909168c8bc68c1e823073015b621a2c5cc7b2729ad not found: ID does not exist" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.186842 4793 scope.go:117] "RemoveContainer" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b" Jan 30 14:10:14 crc kubenswrapper[4793]: E0130 14:10:14.187260 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b\": container with ID starting with 7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b not found: ID does not exist" containerID="7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.187350 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b"} err="failed to get container status \"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b\": rpc error: code = NotFound desc = could not find container \"7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b\": container with ID starting with 7dedcd21fc90afbf2c7d4080478b808ca02e334ee0a5ff06ba8fbb9dab51b13b not found: ID does not exist" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.198765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config" (OuterVolumeSpecName: "config") pod 
"1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.201588 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.203642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.221550 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230929 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230965 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230981 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6mz\" (UniqueName: \"kubernetes.io/projected/1817ab34-b020-4268-b88c-126dc437c966-kube-api-access-nj6mz\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230990 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.230999 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.257714 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1817ab34-b020-4268-b88c-126dc437c966" (UID: "1817ab34-b020-4268-b88c-126dc437c966"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.332263 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1817ab34-b020-4268-b88c-126dc437c966-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.565172 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:10:14 crc kubenswrapper[4793]: I0130 14:10:14.580567 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-n2s4l"] Jan 30 14:10:15 crc kubenswrapper[4793]: I0130 14:10:15.129788 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"b9118352f798bed71e82ce4b518d07c49a400170692d3a7bebe81a94dcc220cb"} Jan 30 14:10:16 crc kubenswrapper[4793]: I0130 14:10:16.523998 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1817ab34-b020-4268-b88c-126dc437c966" path="/var/lib/kubelet/pods/1817ab34-b020-4268-b88c-126dc437c966/volumes" Jan 30 14:10:16 crc kubenswrapper[4793]: I0130 14:10:16.526455 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"d49c9d2a1050f5ed243c6b3a7b6b86330cedaed1d8a0565394963de272b03130"} Jan 30 14:10:19 crc kubenswrapper[4793]: I0130 14:10:19.557964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"efc7d228e44e2727aabe5ea1aba8c086103d815b77e7b65c5e18fc1aa1831899"} Jan 30 14:10:19 crc kubenswrapper[4793]: I0130 14:10:19.559174 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 14:10:19 crc kubenswrapper[4793]: I0130 14:10:19.596027 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.161094779 podStartE2EDuration="7.596007711s" podCreationTimestamp="2026-01-30 14:10:12 +0000 UTC" firstStartedPulling="2026-01-30 14:10:13.087208749 +0000 UTC m=+1623.788557240" lastFinishedPulling="2026-01-30 14:10:18.522121681 +0000 UTC m=+1629.223470172" observedRunningTime="2026-01-30 14:10:19.589428831 +0000 UTC m=+1630.290777342" watchObservedRunningTime="2026-01-30 14:10:19.596007711 +0000 UTC m=+1630.297356202" Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.413878 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.414207 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.571426 4793 generic.go:334] "Generic (PLEG): container finished" podID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerID="596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80" exitCode=0 Jan 30 14:10:20 crc kubenswrapper[4793]: I0130 14:10:20.572708 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerDied","Data":"596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80"} Jan 30 14:10:21 crc kubenswrapper[4793]: I0130 14:10:21.431366 4793 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:21 crc kubenswrapper[4793]: I0130 14:10:21.431396 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:21 crc kubenswrapper[4793]: I0130 14:10:21.969932 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.088685 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.088790 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.088965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.089013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") pod \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\" (UID: \"33ed75d8-77f2-4c4d-b725-b703b8ce2980\") " Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.094151 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts" (OuterVolumeSpecName: "scripts") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.094243 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph" (OuterVolumeSpecName: "kube-api-access-fsvph") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "kube-api-access-fsvph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.117148 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data" (OuterVolumeSpecName: "config-data") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.122257 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33ed75d8-77f2-4c4d-b725-b703b8ce2980" (UID: "33ed75d8-77f2-4c4d-b725-b703b8ce2980"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.191970 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.192015 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsvph\" (UniqueName: \"kubernetes.io/projected/33ed75d8-77f2-4c4d-b725-b703b8ce2980-kube-api-access-fsvph\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.192032 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.192068 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ed75d8-77f2-4c4d-b725-b703b8ce2980-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.614648 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mrwzs" event={"ID":"33ed75d8-77f2-4c4d-b725-b703b8ce2980","Type":"ContainerDied","Data":"d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c"} Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.614929 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d05855063a2f1e60b05519ff4b4fb82e6468ce1afe8545a33be9c04136662c" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.615087 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mrwzs" Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.807810 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.808114 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" containerID="cri-o://fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" gracePeriod=30 Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.822513 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.822924 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" containerID="cri-o://9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" gracePeriod=30 Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.823586 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" containerID="cri-o://c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" gracePeriod=30 Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.838524 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.838802 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" containerID="cri-o://08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" gracePeriod=30 Jan 30 14:10:22 crc kubenswrapper[4793]: I0130 14:10:22.840745 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" containerID="cri-o://cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" gracePeriod=30 Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.624406 4793 generic.go:334] "Generic (PLEG): container finished" podID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" exitCode=143 Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.624549 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerDied","Data":"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"} Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.626880 4793 generic.go:334] "Generic (PLEG): container finished" podID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" exitCode=143 Jan 30 14:10:23 crc kubenswrapper[4793]: I0130 14:10:23.626924 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerDied","Data":"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"} Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.909746 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.910969 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.912430 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 14:10:23 crc kubenswrapper[4793]: E0130 14:10:23.912467 4793 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:24 crc kubenswrapper[4793]: I0130 14:10:24.398793 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:10:24 crc kubenswrapper[4793]: E0130 14:10:24.399081 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:10:25 crc kubenswrapper[4793]: I0130 14:10:25.999473 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:52316->10.217.0.194:8775: read: connection reset by peer" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:25.999529 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:52314->10.217.0.194:8775: read: connection reset by peer" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.463952 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.596561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597109 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597295 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.597329 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") pod \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\" (UID: \"49ed6c75-bf0d-4f2f-a470-42fd54e304da\") " Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.604764 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs" (OuterVolumeSpecName: "logs") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.632364 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9" (OuterVolumeSpecName: "kube-api-access-7kzp9") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "kube-api-access-7kzp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.665714 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data" (OuterVolumeSpecName: "config-data") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.671618 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679403 4793 generic.go:334] "Generic (PLEG): container finished" podID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" exitCode=0 Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679450 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerDied","Data":"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"} Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49ed6c75-bf0d-4f2f-a470-42fd54e304da","Type":"ContainerDied","Data":"8e827d18d94a36e1032ee13a7b09882361977c3cc27e172ae22dfb68a0554721"} Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679496 4793 scope.go:117] "RemoveContainer" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.679502 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.687219 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "49ed6c75-bf0d-4f2f-a470-42fd54e304da" (UID: "49ed6c75-bf0d-4f2f-a470-42fd54e304da"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699892 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ed6c75-bf0d-4f2f-a470-42fd54e304da-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699918 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzp9\" (UniqueName: \"kubernetes.io/projected/49ed6c75-bf0d-4f2f-a470-42fd54e304da-kube-api-access-7kzp9\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699929 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699937 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.699946 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ed6c75-bf0d-4f2f-a470-42fd54e304da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.739571 4793 scope.go:117] "RemoveContainer" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.762123 4793 scope.go:117] "RemoveContainer" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" Jan 30 14:10:26 crc kubenswrapper[4793]: E0130 14:10:26.765491 4793 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f\": container with ID starting with cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f not found: ID does not exist" containerID="cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.765555 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f"} err="failed to get container status \"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f\": rpc error: code = NotFound desc = could not find container \"cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f\": container with ID starting with cef681fcba5d7cd1b55924a221fd300a2e4054308ac0efc9d5314d315bbe6f2f not found: ID does not exist" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.765589 4793 scope.go:117] "RemoveContainer" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" Jan 30 14:10:26 crc kubenswrapper[4793]: E0130 14:10:26.766168 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04\": container with ID starting with 08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04 not found: ID does not exist" containerID="08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04" Jan 30 14:10:26 crc kubenswrapper[4793]: I0130 14:10:26.766223 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04"} err="failed to get container status \"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04\": rpc error: code = NotFound desc = could not find container \"08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04\": container with ID starting with 08cc10887a12753e4db59080359e594580bb48ed03e0b40fc76159723ed11d04 not found: ID does not exist" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.024208 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.040102 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056338 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056844 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerName="nova-manage" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056866 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerName="nova-manage" Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056885 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056893 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056919 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056928 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns" Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056944 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056951 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.056966 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="init" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.056973 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="init" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057203 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" containerName="nova-manage" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057237 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-log" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057247 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1817ab34-b020-4268-b88c-126dc437c966" containerName="dnsmasq-dns" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.057262 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" containerName="nova-metadata-metadata" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.058510 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.065498 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.065776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.069770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.111813 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02223b96-2b8b-4d32-b7ba-9cb517e03f13-logs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.111964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.112077 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjl2\" (UniqueName: \"kubernetes.io/projected/02223b96-2b8b-4d32-b7ba-9cb517e03f13-kube-api-access-ptjl2\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.112103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.112159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-config-data\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.213841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-config-data\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.214589 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02223b96-2b8b-4d32-b7ba-9cb517e03f13-logs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.214815 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc 
kubenswrapper[4793]: I0130 14:10:27.214940 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02223b96-2b8b-4d32-b7ba-9cb517e03f13-logs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.215028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.215140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptjl2\" (UniqueName: \"kubernetes.io/projected/02223b96-2b8b-4d32-b7ba-9cb517e03f13-kube-api-access-ptjl2\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.220371 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.225896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-config-data\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.226902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02223b96-2b8b-4d32-b7ba-9cb517e03f13-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.231434 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptjl2\" (UniqueName: \"kubernetes.io/projected/02223b96-2b8b-4d32-b7ba-9cb517e03f13-kube-api-access-ptjl2\") pod \"nova-metadata-0\" (UID: \"02223b96-2b8b-4d32-b7ba-9cb517e03f13\") " pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.428869 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.629162 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689475 4793 generic.go:334] "Generic (PLEG): container finished" podID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" exitCode=0 Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689534 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerDied","Data":"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"} Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61f197d5-ac2e-4907-aaaf-78ac1156368c","Type":"ContainerDied","Data":"e5af47da88468773843af7a9da670710c549d6d5e8612d43433b449ccbe8bb86"} Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689579 4793 scope.go:117] "RemoveContainer" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.689676 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.719143 4793 scope.go:117] "RemoveContainer" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728644 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.728810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") pod \"61f197d5-ac2e-4907-aaaf-78ac1156368c\" (UID: \"61f197d5-ac2e-4907-aaaf-78ac1156368c\") " Jan 30 
14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.729650 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs" (OuterVolumeSpecName: "logs") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.739965 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf" (OuterVolumeSpecName: "kube-api-access-mlgzf") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "kube-api-access-mlgzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.754241 4793 scope.go:117] "RemoveContainer" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.754983 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a\": container with ID starting with c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a not found: ID does not exist" containerID="c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.755034 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a"} err="failed to get container status \"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a\": rpc error: code = NotFound desc = could not find container \"c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a\": container with ID starting with c515f8633e55698590becb2fb57871a768e4909bc91d90a742f2562e086aee5a not found: ID does not exist" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.755068 4793 scope.go:117] "RemoveContainer" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" Jan 30 14:10:27 crc kubenswrapper[4793]: E0130 14:10:27.755535 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9\": container with ID starting with 9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9 not found: ID does not exist" containerID="9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.755559 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9"} err="failed to get container status \"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9\": rpc error: code = NotFound desc = could not find container \"9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9\": container with ID starting with 9ecca1a763c77b3be0cbd4d8982d889f2d639ba74ffb095a0903cebd243464a9 not found: ID does not exist" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.758027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data" 
(OuterVolumeSpecName: "config-data") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.760179 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.792024 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.797937 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61f197d5-ac2e-4907-aaaf-78ac1156368c" (UID: "61f197d5-ac2e-4907-aaaf-78ac1156368c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830776 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61f197d5-ac2e-4907-aaaf-78ac1156368c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830817 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830843 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830857 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgzf\" (UniqueName: \"kubernetes.io/projected/61f197d5-ac2e-4907-aaaf-78ac1156368c-kube-api-access-mlgzf\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830869 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.830877 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f197d5-ac2e-4907-aaaf-78ac1156368c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:27 crc kubenswrapper[4793]: I0130 14:10:27.937964 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 14:10:27 crc kubenswrapper[4793]: W0130 14:10:27.943982 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02223b96_2b8b_4d32_b7ba_9cb517e03f13.slice/crio-47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660 WatchSource:0}: Error finding container 47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660: Status 404 returned error can't find the container with id 47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660 Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.082969 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.097066 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.117218 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.117779 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.117804 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.117848 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.117857 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.118123 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-log" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.118161 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" containerName="nova-api-api" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.119480 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.124150 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.124235 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.124370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.126567 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.135552 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-config-data\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.135792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-logs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.135911 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.136011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ddc\" (UniqueName: \"kubernetes.io/projected/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-kube-api-access-w9ddc\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.136087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.136207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.239895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.239946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-config-data\") pod \"nova-api-0\" (UID: 
\"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.239983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-logs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240043 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240080 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ddc\" (UniqueName: \"kubernetes.io/projected/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-kube-api-access-w9ddc\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240098 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.240974 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-logs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.248378 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.248419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.250666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-config-data\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.261100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ddc\" (UniqueName: \"kubernetes.io/projected/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-kube-api-access-w9ddc\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.261520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4991f7-e6e6-4dfd-a75b-25a7506591e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b4991f7-e6e6-4dfd-a75b-25a7506591e1\") " pod="openstack/nova-api-0" Jan 
30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.411260 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ed6c75-bf0d-4f2f-a470-42fd54e304da" path="/var/lib/kubelet/pods/49ed6c75-bf0d-4f2f-a470-42fd54e304da/volumes" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.413145 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f197d5-ac2e-4907-aaaf-78ac1156368c" path="/var/lib/kubelet/pods/61f197d5-ac2e-4907-aaaf-78ac1156368c/volumes" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.434675 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.478342 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.545724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") pod \"b0772278-2936-43a7-b8e8-255d72a26a46\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.545778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") pod \"b0772278-2936-43a7-b8e8-255d72a26a46\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.546451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") pod \"b0772278-2936-43a7-b8e8-255d72a26a46\" (UID: \"b0772278-2936-43a7-b8e8-255d72a26a46\") " Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.571346 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x" (OuterVolumeSpecName: "kube-api-access-r7x6x") pod "b0772278-2936-43a7-b8e8-255d72a26a46" (UID: "b0772278-2936-43a7-b8e8-255d72a26a46"). InnerVolumeSpecName "kube-api-access-r7x6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.588174 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data" (OuterVolumeSpecName: "config-data") pod "b0772278-2936-43a7-b8e8-255d72a26a46" (UID: "b0772278-2936-43a7-b8e8-255d72a26a46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.610428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0772278-2936-43a7-b8e8-255d72a26a46" (UID: "b0772278-2936-43a7-b8e8-255d72a26a46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.649806 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.649848 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7x6x\" (UniqueName: \"kubernetes.io/projected/b0772278-2936-43a7-b8e8-255d72a26a46-kube-api-access-r7x6x\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.649865 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0772278-2936-43a7-b8e8-255d72a26a46-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.712210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02223b96-2b8b-4d32-b7ba-9cb517e03f13","Type":"ContainerStarted","Data":"b5332e1f855d542a3aec1e3972120fafd4540f19940a9b97a1d6286167ac2d00"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.712252 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02223b96-2b8b-4d32-b7ba-9cb517e03f13","Type":"ContainerStarted","Data":"ff9fb94535fef65e311e19c7b9311a348c9264d1affd60b0bc5d3319b07a49e9"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.712261 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02223b96-2b8b-4d32-b7ba-9cb517e03f13","Type":"ContainerStarted","Data":"47d3c6ee13331f5692f6d6bda16293a43f64ff62abadf9696460b0dff80e4660"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717770 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0772278-2936-43a7-b8e8-255d72a26a46" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" exitCode=0 Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerDied","Data":"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717878 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0772278-2936-43a7-b8e8-255d72a26a46","Type":"ContainerDied","Data":"0c43fd7a19c8e62a860f534d7237c66cb3f8e183b6b7d0b236a6b8cd04692810"} Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.717898 4793 scope.go:117] "RemoveContainer" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.718038 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.748574 4793 scope.go:117] "RemoveContainer" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.750261 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a\": container with ID starting with fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a not found: ID does not exist" containerID="fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.750292 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a"} err="failed to get container status \"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a\": rpc error: code = NotFound desc = could not find container \"fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a\": container with ID starting with fefd10c586d13efcf40a90a4b2b1ba972aacef577deae68e1fb1307ea6c8d97a not found: ID does not exist" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.750606 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.750580722 podStartE2EDuration="1.750580722s" podCreationTimestamp="2026-01-30 14:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:28.743736646 +0000 UTC m=+1639.445085137" watchObservedRunningTime="2026-01-30 14:10:28.750580722 +0000 UTC m=+1639.451929233" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.770640 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.790503 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.804641 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: E0130 14:10:28.805353 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.805366 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.805554 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" containerName="nova-scheduler-scheduler" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.806450 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.813033 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.834518 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.852574 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvc2\" (UniqueName: \"kubernetes.io/projected/9e04e820-112a-4afa-b908-f9b8be3e9e7c-kube-api-access-9mvc2\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.852659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-config-data\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.852726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.954059 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvc2\" (UniqueName: \"kubernetes.io/projected/9e04e820-112a-4afa-b908-f9b8be3e9e7c-kube-api-access-9mvc2\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.954152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-config-data\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.954221 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.959565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-config-data\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.959600 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e04e820-112a-4afa-b908-f9b8be3e9e7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.972241 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvc2\" (UniqueName: 
\"kubernetes.io/projected/9e04e820-112a-4afa-b908-f9b8be3e9e7c-kube-api-access-9mvc2\") pod \"nova-scheduler-0\" (UID: \"9e04e820-112a-4afa-b908-f9b8be3e9e7c\") " pod="openstack/nova-scheduler-0" Jan 30 14:10:28 crc kubenswrapper[4793]: W0130 14:10:28.989776 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b4991f7_e6e6_4dfd_a75b_25a7506591e1.slice/crio-d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e WatchSource:0}: Error finding container d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e: Status 404 returned error can't find the container with id d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e Jan 30 14:10:28 crc kubenswrapper[4793]: I0130 14:10:28.990157 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.134618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.602416 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 14:10:29 crc kubenswrapper[4793]: W0130 14:10:29.606611 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e04e820_112a_4afa_b908_f9b8be3e9e7c.slice/crio-b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc WatchSource:0}: Error finding container b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc: Status 404 returned error can't find the container with id b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.733716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e04e820-112a-4afa-b908-f9b8be3e9e7c","Type":"ContainerStarted","Data":"b762b3c3e68b9152633fdaa88266289c1d0db7cbd50bef1d3b9594f5bf9ad7dc"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.735569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b4991f7-e6e6-4dfd-a75b-25a7506591e1","Type":"ContainerStarted","Data":"87246b291ffab77db78cc65ecd8c0fd944c2bd447077a37a61c96e2ab8c54184"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.735600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b4991f7-e6e6-4dfd-a75b-25a7506591e1","Type":"ContainerStarted","Data":"89cb391b4339b9ea2b2f0ba87faab6ade18019ef0fd9cfb5a91677f13cadc744"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.735615 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b4991f7-e6e6-4dfd-a75b-25a7506591e1","Type":"ContainerStarted","Data":"d921198f65da1edba7ae4c7525167b4c85f3f6c55c0489270c831ae20a548f2e"} Jan 30 14:10:29 crc kubenswrapper[4793]: I0130 14:10:29.753077 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.752264762 podStartE2EDuration="1.752264762s" podCreationTimestamp="2026-01-30 14:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:29.751126944 +0000 UTC m=+1640.452475465" watchObservedRunningTime="2026-01-30 14:10:29.752264762 +0000 UTC m=+1640.453613253" Jan 30 14:10:30 crc kubenswrapper[4793]: I0130 
14:10:30.410021 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0772278-2936-43a7-b8e8-255d72a26a46" path="/var/lib/kubelet/pods/b0772278-2936-43a7-b8e8-255d72a26a46/volumes" Jan 30 14:10:30 crc kubenswrapper[4793]: I0130 14:10:30.745815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e04e820-112a-4afa-b908-f9b8be3e9e7c","Type":"ContainerStarted","Data":"5fa98a9f2da8132b5f12402c1cbcf5b1d9acbf355abda26521806509c5c1864c"} Jan 30 14:10:30 crc kubenswrapper[4793]: I0130 14:10:30.768650 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.768625208 podStartE2EDuration="2.768625208s" podCreationTimestamp="2026-01-30 14:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:10:30.760583083 +0000 UTC m=+1641.461931584" watchObservedRunningTime="2026-01-30 14:10:30.768625208 +0000 UTC m=+1641.469973699" Jan 30 14:10:32 crc kubenswrapper[4793]: I0130 14:10:32.429815 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 14:10:32 crc kubenswrapper[4793]: I0130 14:10:32.429912 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 14:10:34 crc kubenswrapper[4793]: I0130 14:10:34.135607 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 14:10:36 crc kubenswrapper[4793]: I0130 14:10:36.398657 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:10:36 crc kubenswrapper[4793]: E0130 14:10:36.399607 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.308241 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.310596 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.358659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.358898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.359137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.413205 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.429178 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.429232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.460354 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.460494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.460530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.461314 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.461425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.481109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"certified-operators-cwn45\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:37 crc kubenswrapper[4793]: I0130 14:10:37.631423 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.126034 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.442223 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="02223b96-2b8b-4d32-b7ba-9cb517e03f13" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.442300 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="02223b96-2b8b-4d32-b7ba-9cb517e03f13" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.480462 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.480507 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.826856 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" exitCode=0 Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.826907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b"} Jan 30 14:10:38 crc kubenswrapper[4793]: I0130 14:10:38.826956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerStarted","Data":"4354863d5270a2dd978e9ec14ef4a0fa31ed07055c5a9a9b9bc5612d7fef101e"} Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.136073 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.177875 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.494303 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b4991f7-e6e6-4dfd-a75b-25a7506591e1" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.494993 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b4991f7-e6e6-4dfd-a75b-25a7506591e1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.837131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerStarted","Data":"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132"} Jan 30 14:10:39 crc kubenswrapper[4793]: I0130 14:10:39.899554 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 14:10:42 crc kubenswrapper[4793]: I0130 14:10:42.619143 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 14:10:42 crc kubenswrapper[4793]: I0130 14:10:42.876460 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" exitCode=0 Jan 30 14:10:42 crc kubenswrapper[4793]: I0130 14:10:42.876541 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132"} Jan 30 14:10:43 crc kubenswrapper[4793]: I0130 14:10:43.888684 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerStarted","Data":"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099"} Jan 30 14:10:43 crc kubenswrapper[4793]: I0130 14:10:43.920267 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwn45" podStartSLOduration=2.347998179 podStartE2EDuration="6.920243017s" podCreationTimestamp="2026-01-30 14:10:37 +0000 UTC" firstStartedPulling="2026-01-30 14:10:38.828843025 +0000 UTC m=+1649.530191516" lastFinishedPulling="2026-01-30 14:10:43.401087843 +0000 UTC m=+1654.102436354" observedRunningTime="2026-01-30 14:10:43.914414596 +0000 UTC m=+1654.615763167" watchObservedRunningTime="2026-01-30 14:10:43.920243017 +0000 UTC m=+1654.621591518" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.434702 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.435691 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.440480 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.632807 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.633033 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:47 crc kubenswrapper[4793]: I0130 14:10:47.955236 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.494662 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.495504 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.495624 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.502388 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.672990 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cwn45" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" probeResult="failure" output=< Jan 30 14:10:48 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:10:48 crc kubenswrapper[4793]: > Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.958882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 14:10:48 crc kubenswrapper[4793]: I0130 14:10:48.967909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 14:10:49 crc kubenswrapper[4793]: I0130 14:10:49.398856 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:10:49 crc kubenswrapper[4793]: E0130 14:10:49.399186 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:10:56 crc kubenswrapper[4793]: I0130 14:10:56.749822 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:10:57 crc kubenswrapper[4793]: I0130 14:10:57.751142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:57 crc kubenswrapper[4793]: I0130 14:10:57.844167 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:58 crc kubenswrapper[4793]: I0130 14:10:58.056930 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:10:58 crc kubenswrapper[4793]: I0130 14:10:58.510336 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.046651 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwn45" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" containerID="cri-o://482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" gracePeriod=2 Jan 30 14:10:59 
crc kubenswrapper[4793]: I0130 14:10:59.774128 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.915929 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") pod \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.915995 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") pod \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.916028 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") pod \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\" (UID: \"ea9c91d0-f921-4b9e-a37b-9d50419d506e\") " Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.917308 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities" (OuterVolumeSpecName: "utilities") pod "ea9c91d0-f921-4b9e-a37b-9d50419d506e" (UID: "ea9c91d0-f921-4b9e-a37b-9d50419d506e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.938933 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7" (OuterVolumeSpecName: "kube-api-access-rpjd7") pod "ea9c91d0-f921-4b9e-a37b-9d50419d506e" (UID: "ea9c91d0-f921-4b9e-a37b-9d50419d506e"). InnerVolumeSpecName "kube-api-access-rpjd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:10:59 crc kubenswrapper[4793]: I0130 14:10:59.998164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea9c91d0-f921-4b9e-a37b-9d50419d506e" (UID: "ea9c91d0-f921-4b9e-a37b-9d50419d506e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.018146 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpjd7\" (UniqueName: \"kubernetes.io/projected/ea9c91d0-f921-4b9e-a37b-9d50419d506e-kube-api-access-rpjd7\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.018190 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.018202 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c91d0-f921-4b9e-a37b-9d50419d506e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058073 4793 generic.go:334] "Generic (PLEG): container finished" podID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" exitCode=0 Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058119 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099"} Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwn45" event={"ID":"ea9c91d0-f921-4b9e-a37b-9d50419d506e","Type":"ContainerDied","Data":"4354863d5270a2dd978e9ec14ef4a0fa31ed07055c5a9a9b9bc5612d7fef101e"} Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058159 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwn45" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.058178 4793 scope.go:117] "RemoveContainer" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.083827 4793 scope.go:117] "RemoveContainer" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.096785 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.110204 4793 scope.go:117] "RemoveContainer" containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.148465 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwn45"] Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.183507 4793 scope.go:117] "RemoveContainer" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" Jan 30 14:11:00 crc kubenswrapper[4793]: E0130 14:11:00.183891 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099\": container with ID starting with 482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099 not found: ID does not exist" containerID="482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.183919 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099"} err="failed to get container status \"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099\": rpc error: code = NotFound desc = could not find container \"482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099\": container with ID starting with 482d7b83a218f5a900a930d62002277d0edb286c91067420f3d1ffa548266099 not found: ID does not exist" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.183947 4793 scope.go:117] "RemoveContainer" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" Jan 30 14:11:00 crc kubenswrapper[4793]: E0130 14:11:00.184212 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132\": container with ID starting with 7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132 not found: ID does not exist" containerID="7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.184245 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132"} err="failed to get container status \"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132\": rpc error: code = NotFound desc = could not find container \"7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132\": container with ID starting with 7b4d77e06055d7d69ca735e046cf1b7c995fc4da55c8f5240f3e1c2b8476c132 not found: ID does not exist" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.184259 4793 scope.go:117] "RemoveContainer" 
containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" Jan 30 14:11:00 crc kubenswrapper[4793]: E0130 14:11:00.184457 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b\": container with ID starting with 6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b not found: ID does not exist" containerID="6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.184477 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b"} err="failed to get container status \"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b\": rpc error: code = NotFound desc = could not find container \"6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b\": container with ID starting with 6b99fa9b0810aaa984fd8f0806aebea9b00f9281eb01a6299bcaaa086e037c4b not found: ID does not exist" Jan 30 14:11:00 crc kubenswrapper[4793]: I0130 14:11:00.409511 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" path="/var/lib/kubelet/pods/ea9c91d0-f921-4b9e-a37b-9d50419d506e/volumes" Jan 30 14:11:01 crc kubenswrapper[4793]: I0130 14:11:01.940105 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" containerID="cri-o://ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" gracePeriod=604795 Jan 30 14:11:03 crc kubenswrapper[4793]: I0130 14:11:03.398551 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:03 crc kubenswrapper[4793]: E0130 14:11:03.398869 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:03 crc kubenswrapper[4793]: I0130 14:11:03.455256 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" containerID="cri-o://b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265" gracePeriod=604796 Jan 30 14:11:06 crc kubenswrapper[4793]: I0130 14:11:06.078451 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 30 14:11:06 crc kubenswrapper[4793]: I0130 14:11:06.216206 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.541677 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.697768 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.697842 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.697987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698169 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698305 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698333 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698383 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698450 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698493 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.698544 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: 
\"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.699145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") pod \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\" (UID: \"0ab4371b-53c0-41a1-9561-0c02f936c7a7\") " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.700455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.705675 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.706136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info" (OuterVolumeSpecName: "pod-info") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.706159 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.706422 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.711116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.722333 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w" (OuterVolumeSpecName: "kube-api-access-rck4w") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "kube-api-access-rck4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.730278 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.787557 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data" (OuterVolumeSpecName: "config-data") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803745 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803793 4793 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ab4371b-53c0-41a1-9561-0c02f936c7a7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803805 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803821 4793 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ab4371b-53c0-41a1-9561-0c02f936c7a7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803831 4793 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803842 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803879 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803892 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rck4w\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-kube-api-access-rck4w\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.803904 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.828367 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.828666 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.893178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0ab4371b-53c0-41a1-9561-0c02f936c7a7" (UID: "0ab4371b-53c0-41a1-9561-0c02f936c7a7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.905885 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ab4371b-53c0-41a1-9561-0c02f936c7a7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.905922 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:08 crc kubenswrapper[4793]: I0130 14:11:08.905935 4793 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ab4371b-53c0-41a1-9561-0c02f936c7a7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159013 4793 generic.go:334] "Generic (PLEG): container finished" podID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" exitCode=0 Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerDied","Data":"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa"} Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ab4371b-53c0-41a1-9561-0c02f936c7a7","Type":"ContainerDied","Data":"0efe8f891a233c8e5ac4fe6bb1b425a66ddbc8f34f8412134d77a42240eb7c39"} Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159428 4793 scope.go:117] "RemoveContainer" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.159581 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.230847 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.242861 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.248003 4793 scope.go:117] "RemoveContainer" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.293388 4793 scope.go:117] "RemoveContainer" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.299796 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa\": container with ID starting with ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa not found: ID does not exist" containerID="ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.299850 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa"} err="failed to get container status \"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa\": rpc error: code = NotFound desc = could not find container \"ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa\": container with ID starting with ad53439e877e793da3a16bae319e7183acc1572cc86e0445e340b4cb131764fa not found: ID does not exist" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.299884 4793 scope.go:117] "RemoveContainer" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.311596 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312222 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-utilities" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312299 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-utilities" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312400 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="setup-container" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312482 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="setup-container" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312556 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312685 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-content" Jan 30 14:11:09 crc kubenswrapper[4793]: 
I0130 14:11:09.312741 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="extract-content" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.312808 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.312865 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.313124 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" containerName="rabbitmq" Jan 30 14:11:09 crc kubenswrapper[4793]: E0130 14:11:09.311662 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48\": container with ID starting with 06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48 not found: ID does not exist" containerID="06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.313248 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48"} err="failed to get container status \"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48\": rpc error: code = NotFound desc = could not find container \"06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48\": container with ID starting with 06c6f66641d91c20fcb16966deca91b9393e1a792c1e71913c61a87bad5e7d48 not found: ID does not exist" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.313216 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9c91d0-f921-4b9e-a37b-9d50419d506e" containerName="registry-server" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.314466 4793 util.go:30] "No sandbox for pod can be found. 
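
The RemoveStaleState lines above fire because rabbitmq-server-0 is a StatefulSet pod coming back under a brand-new UID (0ab4371b-... is replaced by 7ffc0461-... just below): on the SyncLoop ADD, the CPU and memory managers garbage-collect per-container state keyed by pod UIDs that no longer exist. A compact Go sketch of that reconciliation; the types are illustrative, not the kubelet's:

package main

import "fmt"

// assignment keys resource-manager state the way the log reports it:
// by pod UID plus container name.
type assignment struct{ podUID, containerName string }

// removeStaleState drops every entry whose pod UID is no longer live,
// mirroring the "Deleted CPUSet assignment" lines above.
func removeStaleState(state map[assignment]string, livePods map[string]bool) {
	for key := range state {
		if !livePods[key.podUID] {
			fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n",
				key.podUID, key.containerName)
			delete(state, key)
		}
	}
}

func main() {
	state := map[assignment]string{
		{"0ab4371b-53c0-41a1-9561-0c02f936c7a7", "rabbitmq"}: "0-3",
	}
	live := map[string]bool{"7ffc0461-9589-45f5-a656-85cc01de58ed": true}
	removeStaleState(state, live)
}
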
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.320687 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4mm4r" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324497 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324540 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324597 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324634 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.324756 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.325035 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.348030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414000 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ffc0461-9589-45f5-a656-85cc01de58ed-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414075 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414141 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/7ffc0461-9589-45f5-a656-85cc01de58ed-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414266 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414291 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzqg\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-kube-api-access-vqzqg\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.414408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516189 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ffc0461-9589-45f5-a656-85cc01de58ed-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516259 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " 
pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ffc0461-9589-45f5-a656-85cc01de58ed-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516469 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqzqg\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-kube-api-access-vqzqg\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.516653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517446 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517882 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.517932 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.518474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ffc0461-9589-45f5-a656-85cc01de58ed-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.525801 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ffc0461-9589-45f5-a656-85cc01de58ed-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.528718 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.533557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqzqg\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-kube-api-access-vqzqg\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.534255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ffc0461-9589-45f5-a656-85cc01de58ed-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.540039 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ffc0461-9589-45f5-a656-85cc01de58ed-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.602304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7ffc0461-9589-45f5-a656-85cc01de58ed\") " pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.636842 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.956822 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.958577 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:09 crc kubenswrapper[4793]: I0130 14:11:09.983005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.018905 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035469 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035578 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035610 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035705 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035732 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035757 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.035862 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137784 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137877 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.137956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.138026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.139017 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.139524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:10 crc 
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.140009 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.140539 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.140849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.141030 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.167013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"dnsmasq-dns-79bd4cc8c9-swg98\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.196732 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerID="b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265" exitCode=0
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.196884 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerDied","Data":"b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265"}
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.290078 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.304762 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.439357 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab4371b-53c0-41a1-9561-0c02f936c7a7" path="/var/lib/kubelet/pods/0ab4371b-53c0-41a1-9561-0c02f936c7a7/volumes"
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.448635 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455496 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455699 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455735 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455775 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455805 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455909 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.455953 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") pod \"5a4cd276-23a5-4acb-bb1b-41470a11c945\" (UID: \"5a4cd276-23a5-4acb-bb1b-41470a11c945\") "
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.465665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.477368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.493632 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5" (OuterVolumeSpecName: "kube-api-access-f59v5") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "kube-api-access-f59v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.495600 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.498676 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.499769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.505699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.508753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info" (OuterVolumeSpecName: "pod-info") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559003 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559036 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559048 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59v5\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-kube-api-access-f59v5\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559084 4793 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a4cd276-23a5-4acb-bb1b-41470a11c945-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559112 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559121 4793 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559130 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.559145 4793 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a4cd276-23a5-4acb-bb1b-41470a11c945-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.632300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data" (OuterVolumeSpecName: "config-data") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.656164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf" (OuterVolumeSpecName: "server-conf") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.660731 4793 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.660900 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a4cd276-23a5-4acb-bb1b-41470a11c945-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.695021 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.746645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5a4cd276-23a5-4acb-bb1b-41470a11c945" (UID: "5a4cd276-23a5-4acb-bb1b-41470a11c945"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.763487 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a4cd276-23a5-4acb-bb1b-41470a11c945-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:10 crc kubenswrapper[4793]: I0130 14:11:10.763528 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.030015 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.221434 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a4cd276-23a5-4acb-bb1b-41470a11c945","Type":"ContainerDied","Data":"49420acdae0565905cd8f73dba3384bd4f0c8ed41985335ead11f16b3b125159"} Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.221482 4793 scope.go:117] "RemoveContainer" containerID="b985352acd3221df1cd541d3576c66285b247ac814efbffa0d9afc52e1848265" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.221636 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.226590 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerStarted","Data":"b126c034f300df436262ee7b232720f4860c063847d40c54826a736a9bb22ffb"} Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.227617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerStarted","Data":"ad26c96752807da90d4235406116a1597523e7ece85d333a17d15f0f529f2705"} Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.273172 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.276691 4793 scope.go:117] "RemoveContainer" containerID="d616170562eeb4ba00ef47dc4bae339cb080a28d5310b1ec237e9ad217b38991" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.283320 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.298835 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: E0130 14:11:11.299290 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.299308 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" Jan 30 14:11:11 crc kubenswrapper[4793]: E0130 14:11:11.299336 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="setup-container" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.299343 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="setup-container" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.299516 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" containerName="rabbitmq" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.300456 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.312528 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.317877 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318242 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dkqxx" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318397 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318498 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318593 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.318526 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374436 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374514 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkd5\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-kube-api-access-jwkd5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374563 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374621 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374646 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b0247ba-adfd-4195-bf23-91478001fed7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374691 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b0247ba-adfd-4195-bf23-91478001fed7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.374713 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-jwkd5\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-kube-api-access-jwkd5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478818 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.478956 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b0247ba-adfd-4195-bf23-91478001fed7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b0247ba-adfd-4195-bf23-91478001fed7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479094 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479155 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479321 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-plugins\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.479715 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.480161 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.480905 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.486138 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.486808 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b0247ba-adfd-4195-bf23-91478001fed7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.487619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b0247ba-adfd-4195-bf23-91478001fed7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.488551 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.501625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b0247ba-adfd-4195-bf23-91478001fed7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.531962 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkd5\" (UniqueName: \"kubernetes.io/projected/3b0247ba-adfd-4195-bf23-91478001fed7-kube-api-access-jwkd5\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.642400 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3b0247ba-adfd-4195-bf23-91478001fed7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:11 crc kubenswrapper[4793]: I0130 14:11:11.719923 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.240951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerStarted","Data":"b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242"} Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.244476 4793 generic.go:334] "Generic (PLEG): container finished" podID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerID="942b9a91649d88b76815dae7ef5ceda6f5ba7882083b88d098feb75a679ceddd" exitCode=0 Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.244516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerDied","Data":"942b9a91649d88b76815dae7ef5ceda6f5ba7882083b88d098feb75a679ceddd"} Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.423744 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4cd276-23a5-4acb-bb1b-41470a11c945" path="/var/lib/kubelet/pods/5a4cd276-23a5-4acb-bb1b-41470a11c945/volumes" Jan 30 14:11:12 crc kubenswrapper[4793]: I0130 14:11:12.435857 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.255073 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerStarted","Data":"c7c8132d652f1c852c160648dbfd496d7ed534aa237703b5ad385eb046c3abbd"} Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.258855 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerStarted","Data":"2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e"} Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.258909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:13 crc kubenswrapper[4793]: I0130 14:11:13.285987 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" podStartSLOduration=4.28596617 podStartE2EDuration="4.28596617s" podCreationTimestamp="2026-01-30 14:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:11:13.274108117 +0000 UTC m=+1683.975456618" watchObservedRunningTime="2026-01-30 14:11:13.28596617 +0000 UTC m=+1683.987314671" Jan 30 14:11:14 crc kubenswrapper[4793]: I0130 14:11:14.272191 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerStarted","Data":"8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8"} Jan 30 14:11:14 crc kubenswrapper[4793]: I0130 14:11:14.398368 4793 scope.go:117] "RemoveContainer" 
containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:14 crc kubenswrapper[4793]: E0130 14:11:14.398642 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.307466 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.387968 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"] Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.388272 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns" containerID="cri-o://1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" gracePeriod=10 Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.623670 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-5bm62"] Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.626377 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.671350 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-5bm62"] Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-config\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772288 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swswb\" (UniqueName: \"kubernetes.io/projected/b3e8eb28-c303-409b-a89b-b273b2f56fff-kube-api-access-swswb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772345 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.772834 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.773017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875017 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-config\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swswb\" (UniqueName: \"kubernetes.io/projected/b3e8eb28-c303-409b-a89b-b273b2f56fff-kube-api-access-swswb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875855 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.875914 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.876201 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-config\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.876800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.876875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.877593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.877795 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.878110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e8eb28-c303-409b-a89b-b273b2f56fff-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.898263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swswb\" (UniqueName: \"kubernetes.io/projected/b3e8eb28-c303-409b-a89b-b273b2f56fff-kube-api-access-swswb\") pod \"dnsmasq-dns-6ff66b85ff-5bm62\" (UID: \"b3e8eb28-c303-409b-a89b-b273b2f56fff\") " pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:20 crc kubenswrapper[4793]: I0130 14:11:20.956013 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.109838 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181872 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.181991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.182033 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.182087 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") pod \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\" (UID: \"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1\") " Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.187755 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh" (OuterVolumeSpecName: "kube-api-access-9wjfh") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "kube-api-access-9wjfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.283915 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wjfh\" (UniqueName: \"kubernetes.io/projected/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-kube-api-access-9wjfh\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.338144 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-5bm62"] Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342118 4793 generic.go:334] "Generic (PLEG): container finished" podID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" exitCode=0 Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerDied","Data":"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4"} Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" event={"ID":"a7422c6a-9c17-4ea4-bae8-9006e19fc4c1","Type":"ContainerDied","Data":"78fb92af330aba5ae85ee09e8c30d31dd6612ee663286c5bea03ea04be9abef3"} Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.342374 4793 scope.go:117] "RemoveContainer" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.343498 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-cxkd2" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.361577 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.372510 4793 scope.go:117] "RemoveContainer" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.373197 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.377663 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.386026 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.386103 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.386118 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.399477 4793 scope.go:117] "RemoveContainer" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" Jan 30 14:11:21 crc kubenswrapper[4793]: E0130 14:11:21.400584 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4\": container with ID starting with 1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4 not found: ID does not exist" containerID="1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.400620 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4"} err="failed to get container status \"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4\": rpc error: code = NotFound desc = could not find container \"1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4\": container with ID starting with 1ebc05973335cd0dac1ac2b208cd0f00ea45ad92f9f5cc7fc6517e9852648fc4 not found: ID does not exist" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.400643 4793 scope.go:117] "RemoveContainer" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" Jan 30 14:11:21 crc kubenswrapper[4793]: E0130 14:11:21.401765 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889\": container with ID starting with 0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889 not found: ID does not exist" containerID="0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.401808 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889"} err="failed to get container status \"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889\": rpc error: code = NotFound desc = could not find container \"0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889\": container with ID starting with 0ed268e51f99d2c135e3190e2dee2366648fd155346f19677adddd196ebb1889 not found: ID does not exist" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.405110 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config" (OuterVolumeSpecName: 
"config") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.408508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" (UID: "a7422c6a-9c17-4ea4-bae8-9006e19fc4c1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.487580 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.488015 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.703966 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"] Jan 30 14:11:21 crc kubenswrapper[4793]: I0130 14:11:21.715110 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-cxkd2"] Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.354472 4793 generic.go:334] "Generic (PLEG): container finished" podID="b3e8eb28-c303-409b-a89b-b273b2f56fff" containerID="edaded44b57086b3e7c84221f1f47f36c4cc2427d1e444f44e5430172c9e82d2" exitCode=0 Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.354527 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" event={"ID":"b3e8eb28-c303-409b-a89b-b273b2f56fff","Type":"ContainerDied","Data":"edaded44b57086b3e7c84221f1f47f36c4cc2427d1e444f44e5430172c9e82d2"} Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.354563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" event={"ID":"b3e8eb28-c303-409b-a89b-b273b2f56fff","Type":"ContainerStarted","Data":"8dee820bbac36fa286cdb5cc61dcdf27fa6218771c3044009cf48d9ef23c5b9b"} Jan 30 14:11:22 crc kubenswrapper[4793]: I0130 14:11:22.414966 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" path="/var/lib/kubelet/pods/a7422c6a-9c17-4ea4-bae8-9006e19fc4c1/volumes" Jan 30 14:11:23 crc kubenswrapper[4793]: I0130 14:11:23.364396 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" event={"ID":"b3e8eb28-c303-409b-a89b-b273b2f56fff","Type":"ContainerStarted","Data":"73d9105cd08f1683fc3700f4a2cacf52c2e7d1cdf04ec141f1fe5704fbdea46a"} Jan 30 14:11:23 crc kubenswrapper[4793]: I0130 14:11:23.364734 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:23 crc kubenswrapper[4793]: I0130 14:11:23.390254 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" podStartSLOduration=3.390231331 podStartE2EDuration="3.390231331s" podCreationTimestamp="2026-01-30 14:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 14:11:23.381668659 +0000 UTC m=+1694.083017170" watchObservedRunningTime="2026-01-30 14:11:23.390231331 +0000 UTC m=+1694.091579822" Jan 30 14:11:29 crc kubenswrapper[4793]: I0130 14:11:29.399674 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:29 crc kubenswrapper[4793]: E0130 14:11:29.400566 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:30 crc kubenswrapper[4793]: I0130 14:11:30.959287 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff66b85ff-5bm62" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.030888 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.031400 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns" containerID="cri-o://2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e" gracePeriod=10 Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.441113 4793 generic.go:334] "Generic (PLEG): container finished" podID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerID="2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e" exitCode=0 Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.441185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerDied","Data":"2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e"} Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.710807 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.800411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.800664 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.800761 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801191 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.801411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") pod \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\" (UID: \"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b\") " Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.825630 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m" (OuterVolumeSpecName: "kube-api-access-jqz2m") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "kube-api-access-jqz2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.850385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.850455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.853545 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.854884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.863959 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config" (OuterVolumeSpecName: "config") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.876834 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" (UID: "da21a74a-6a8e-4c6f-b7de-4eb33b40d85b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.903965 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904001 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904015 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904027 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904037 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904077 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:31 crc kubenswrapper[4793]: I0130 14:11:31.904086 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqz2m\" (UniqueName: \"kubernetes.io/projected/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b-kube-api-access-jqz2m\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.451441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" event={"ID":"da21a74a-6a8e-4c6f-b7de-4eb33b40d85b","Type":"ContainerDied","Data":"ad26c96752807da90d4235406116a1597523e7ece85d333a17d15f0f529f2705"} Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.451767 4793 scope.go:117] "RemoveContainer" containerID="2a32dca8cb61b9289690294b5f09f596754cf5c4a8d30bb00d21441bb933964e" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.451493 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-swg98" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.483436 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.489695 4793 scope.go:117] "RemoveContainer" containerID="942b9a91649d88b76815dae7ef5ceda6f5ba7882083b88d098feb75a679ceddd" Jan 30 14:11:32 crc kubenswrapper[4793]: I0130 14:11:32.493992 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-swg98"] Jan 30 14:11:34 crc kubenswrapper[4793]: I0130 14:11:34.411745 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" path="/var/lib/kubelet/pods/da21a74a-6a8e-4c6f-b7de-4eb33b40d85b/volumes" Jan 30 14:11:40 crc kubenswrapper[4793]: I0130 14:11:40.408901 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:11:40 crc kubenswrapper[4793]: E0130 14:11:40.409721 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:11:44 crc kubenswrapper[4793]: I0130 14:11:44.930181 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-vsdkv" podUID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:11:45 crc kubenswrapper[4793]: I0130 14:11:45.894801 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-g9hvr" podUID="519ea47c-0d76-44cb-af34-823c71e508c9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.659232 4793 generic.go:334] "Generic (PLEG): container finished" podID="7ffc0461-9589-45f5-a656-85cc01de58ed" containerID="b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242" exitCode=0 Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.659334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerDied","Data":"b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242"} Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.662092 4793 generic.go:334] "Generic (PLEG): container finished" podID="3b0247ba-adfd-4195-bf23-91478001fed7" containerID="8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8" exitCode=0 Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.662125 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerDied","Data":"8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8"} Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.818185 4793 scope.go:117] "RemoveContainer" containerID="915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065" Jan 30 
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.659232 4793 generic.go:334] "Generic (PLEG): container finished" podID="7ffc0461-9589-45f5-a656-85cc01de58ed" containerID="b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242" exitCode=0
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.659334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerDied","Data":"b78b95b51eca377e41ebaa0a23cb9ab290a9ef1905c2ed2332706169e67ce242"}
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.662092 4793 generic.go:334] "Generic (PLEG): container finished" podID="3b0247ba-adfd-4195-bf23-91478001fed7" containerID="8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8" exitCode=0
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.662125 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerDied","Data":"8cfc8cd39798a1f8a2ba8f639e157a037ab2e66ed79db4999cad2e83c92d49c8"}
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.818185 4793 scope.go:117] "RemoveContainer" containerID="915b433bd8f492e1285f7731f190606a27443ef65efaea3a89e0a1143cdf8065"
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.898664 4793 scope.go:117] "RemoveContainer" containerID="0a03fc4fb64bbc55f9e83e2df3c5192020b95575ac83335c13e52269467122b8"
Jan 30 14:11:46 crc kubenswrapper[4793]: I0130 14:11:46.953338 4793 scope.go:117] "RemoveContainer" containerID="d6ac5e8cc6b63af60a4456f31c6bd2647365686983f5e5af22d83b768d333382"
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.674630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3b0247ba-adfd-4195-bf23-91478001fed7","Type":"ContainerStarted","Data":"4ad631a244ea3a62ebbde0b0673b298753063f8dfc7ec291e85b02e61c0cf71b"}
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.677130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ffc0461-9589-45f5-a656-85cc01de58ed","Type":"ContainerStarted","Data":"c0d7bf6ddb176fb2e5c090a7298d794e3f968020a1664efaef051a3ba34d4fe8"}
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.678275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 14:11:47 crc kubenswrapper[4793]: I0130 14:11:47.716645 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.716624633 podStartE2EDuration="38.716624633s" podCreationTimestamp="2026-01-30 14:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:11:47.711756632 +0000 UTC m=+1718.413105133" watchObservedRunningTime="2026-01-30 14:11:47.716624633 +0000 UTC m=+1718.417973124"
Jan 30 14:11:48 crc kubenswrapper[4793]: I0130 14:11:48.684114 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:11:48 crc kubenswrapper[4793]: I0130 14:11:48.711512 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.711493226 podStartE2EDuration="37.711493226s" podCreationTimestamp="2026-01-30 14:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:11:48.702401782 +0000 UTC m=+1719.403750273" watchObservedRunningTime="2026-01-30 14:11:48.711493226 +0000 UTC m=+1719.412841717"
Jan 30 14:11:52 crc kubenswrapper[4793]: I0130 14:11:52.398856 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:11:52 crc kubenswrapper[4793]: E0130 14:11:52.399256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
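The machine-config-daemon pair above (RemoveContainer followed by "Error syncing pod, skipping") recurs throughout this log roughly every 10 to 15 seconds: each pod-worker sync re-checks the container and finds it still inside restart back-off. The "back-off 5m0s" string is the back-off ceiling. As a general kubelet behaviour (upstream defaults; the constants themselves are not printed in this log), the restart delay doubles per failed restart up to a five-minute cap:

```latex
% Kubelet container restart back-off, upstream defaults (assumed, not logged):
% 10s initial delay, doubling per failed restart, capped at 5 minutes,
% and reset after the container runs cleanly for 10 minutes.
\[
  \mathrm{delay}(n) \;=\; \min\!\left(10\,\mathrm{s}\cdot 2^{\,n-1},\; 300\,\mathrm{s}\right),
  \qquad 10 \to 20 \to 40 \to 80 \to 160 \to 300\,\mathrm{s}
\]
```

Once a container has failed enough times to reach the cap, every retry is 5 minutes apart, which matches the long, unchanging tail of these entries later in the log.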
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.174020 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8"]
Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.177426 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.177675 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns"
Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.177973 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="init"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178300 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="init"
Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.178373 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="init"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178429 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="init"
Jan 30 14:11:59 crc kubenswrapper[4793]: E0130 14:11:59.178501 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178572 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178882 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7422c6a-9c17-4ea4-bae8-9006e19fc4c1" containerName="dnsmasq-dns"
Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.178991 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="da21a74a-6a8e-4c6f-b7de-4eb33b40d85b" containerName="dnsmasq-dns"
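The RemoveStaleState / "Deleted CPUSet assignment" burst is the kubelet's CPU and memory managers purging per-container state left behind by the deleted dnsmasq pods when a new pod is admitted; these lines appear even with the default none/None policies, so they do not by themselves mean CPU pinning is in use. For reference only, the policies live in the KubeletConfiguration; a hypothetical sketch (this node's actual config is not shown anywhere in the log):

```yaml
# Hypothetical KubeletConfiguration sketch, kubelet.config.k8s.io/v1beta1
# field names; values here are illustrative, not taken from this node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static      # default is "none"; stale-state cleanup runs either way
memoryManagerPolicy: Static   # default is "None"
reservedSystemCPUs: "0"       # the static CPU policy needs some CPUs reserved
```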
\"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.267700 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.267820 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.369896 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.369962 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.370023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.370085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.375778 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.375977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.377457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.399456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.530256 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:11:59 crc kubenswrapper[4793]: I0130 14:11:59.640620 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7ffc0461-9589-45f5-a656-85cc01de58ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.207:5671: connect: connection refused" Jan 30 14:12:00 crc kubenswrapper[4793]: W0130 14:12:00.894290 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03127c65_edbf_41bd_9543_35ae0eddbff6.slice/crio-75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3 WatchSource:0}: Error finding container 75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3: Status 404 returned error can't find the container with id 75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3 Jan 30 14:12:00 crc kubenswrapper[4793]: I0130 14:12:00.905001 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8"] Jan 30 14:12:01 crc kubenswrapper[4793]: I0130 14:12:01.724462 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3b0247ba-adfd-4195-bf23-91478001fed7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.209:5671: connect: connection refused" Jan 30 14:12:01 crc kubenswrapper[4793]: I0130 14:12:01.795413 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerStarted","Data":"75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3"} Jan 30 14:12:03 crc kubenswrapper[4793]: I0130 14:12:03.398661 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:12:03 crc kubenswrapper[4793]: E0130 14:12:03.399017 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 14:12:09 crc kubenswrapper[4793]: I0130 14:12:09.639882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 14:12:11 crc kubenswrapper[4793]: I0130 14:12:11.723232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:12:14 crc kubenswrapper[4793]: I0130 14:12:14.401084 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:12:14 crc kubenswrapper[4793]: E0130 14:12:14.401647 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.092321 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.092490 4793 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 14:12:16 crc kubenswrapper[4793]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Jan 30 14:12:16 crc kubenswrapper[4793]: - hosts: all
Jan 30 14:12:16 crc kubenswrapper[4793]: strategy: linear
Jan 30 14:12:16 crc kubenswrapper[4793]: tasks:
Jan 30 14:12:16 crc kubenswrapper[4793]: - name: Enable podified-repos
Jan 30 14:12:16 crc kubenswrapper[4793]: become: true
Jan 30 14:12:16 crc kubenswrapper[4793]: ansible.builtin.shell: |
Jan 30 14:12:16 crc kubenswrapper[4793]: set -euxo pipefail
Jan 30 14:12:16 crc kubenswrapper[4793]: pushd /var/tmp
Jan 30 14:12:16 crc kubenswrapper[4793]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 30 14:12:16 crc kubenswrapper[4793]: pushd repo-setup-main
Jan 30 14:12:16 crc kubenswrapper[4793]: python3 -m venv ./venv
Jan 30 14:12:16 crc kubenswrapper[4793]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 30 14:12:16 crc kubenswrapper[4793]: ./venv/bin/repo-setup current-podified -b antelope
Jan 30 14:12:16 crc kubenswrapper[4793]: popd
Jan 30 14:12:16 crc kubenswrapper[4793]: rm -rf repo-setup-main
Jan 30 14:12:16 crc kubenswrapper[4793]:
Jan 30 14:12:16 crc kubenswrapper[4793]:
Jan 30 14:12:16 crc kubenswrapper[4793]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 30 14:12:16 crc kubenswrapper[4793]: edpm_override_hosts: openstack-edpm-ipam
Jan 30 14:12:16 crc kubenswrapper[4793]: edpm_service_type: repo-setup
Jan 30 14:12:16 crc kubenswrapper[4793]:
Jan 30 14:12:16 crc kubenswrapper[4793]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq2gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8_openstack(03127c65-edbf-41bd-9543-35ae0eddbff6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 30 14:12:16 crc kubenswrapper[4793]: > logger="UnhandledError"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.093600 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6"
Jan 30 14:12:16 crc kubenswrapper[4793]: E0130 14:12:16.953546 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6"
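The RUNNER_PLAYBOOK value interleaved with journal prefixes in the spec dump above is an inline Ansible playbook that ansible-runner executes via the Args shown (`ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam`). Stripped of the log prefixes, with YAML indentation restored (the journal flattens leading whitespace, so the nesting below is reconstructed), it reads:

```yaml
# Reconstructed from the RUNNER_PLAYBOOK env value in the spec dump above;
# content verbatim, indentation restored to valid YAML.
- hosts: all
  strategy: linear
  tasks:
    - name: Enable podified-repos
      become: true
      ansible.builtin.shell: |
        set -euxo pipefail
        pushd /var/tmp
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main
```

The pull failure itself is transient ("context canceled" while copying the image config), and the pod transitions into ImagePullBackOff until the retry at 14:12:32 succeeds.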
Jan 30 14:12:25 crc kubenswrapper[4793]: I0130 14:12:25.398967 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70"
Jan 30 14:12:25 crc kubenswrapper[4793]: E0130 14:12:25.400111 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:12:32 crc kubenswrapper[4793]: I0130 14:12:32.212759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:12:33 crc kubenswrapper[4793]: I0130 14:12:33.115033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerStarted","Data":"7b11af670b73401f4802a9bea647881a00e8ba16559b8a2c4149777c928f19f1"}
Jan 30 14:12:33 crc kubenswrapper[4793]: I0130 14:12:33.142702 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" podStartSLOduration=2.829010543 podStartE2EDuration="34.142685316s" podCreationTimestamp="2026-01-30 14:11:59 +0000 UTC" firstStartedPulling="2026-01-30 14:12:00.896807838 +0000 UTC m=+1731.598156329" lastFinishedPulling="2026-01-30 14:12:32.210482611 +0000 UTC m=+1762.911831102" observedRunningTime="2026-01-30 14:12:33.136612297 +0000 UTC m=+1763.837960788" watchObservedRunningTime="2026-01-30 14:12:33.142685316 +0000 UTC m=+1763.844033797"
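The startup-latency line above decomposes cleanly: podStartSLOduration is the end-to-end duration minus the time spent pulling the image (which is why the earlier rabbitmq entries, with zero-valued pull timestamps, had SLO equal to E2E). Using the monotonic m=+ offsets from the entry itself:

```latex
% pull  = lastFinishedPulling - firstStartedPulling
%       = (m=+1762.911831102) - (m=+1731.598156329) = 31.313674773 s
% E2E   = watchObservedRunningTime - podCreationTimestamp
%       = 14:12:33.142685316 - 14:11:59 = 34.142685316 s
\[
  \mathrm{podStartSLOduration}
  = 34.142685316\,\mathrm{s} - 31.313674773\,\mathrm{s}
  = 2.829010543\,\mathrm{s}
\]
```

So of the 34 s this pod took to come up, all but about 2.8 s was the ImagePullBackOff episode logged at 14:12:16.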
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802316 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.802711 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") pod \"03127c65-edbf-41bd-9543-35ae0eddbff6\" (UID: \"03127c65-edbf-41bd-9543-35ae0eddbff6\") " Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.811564 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj" (OuterVolumeSpecName: "kube-api-access-dq2gj") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "kube-api-access-dq2gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.811819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.832605 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.837247 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory" (OuterVolumeSpecName: "inventory") pod "03127c65-edbf-41bd-9543-35ae0eddbff6" (UID: "03127c65-edbf-41bd-9543-35ae0eddbff6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904534 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904569 4793 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904580 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03127c65-edbf-41bd-9543-35ae0eddbff6-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:46 crc kubenswrapper[4793]: I0130 14:12:46.904589 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2gj\" (UniqueName: \"kubernetes.io/projected/03127c65-edbf-41bd-9543-35ae0eddbff6-kube-api-access-dq2gj\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.270070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" event={"ID":"03127c65-edbf-41bd-9543-35ae0eddbff6","Type":"ContainerDied","Data":"75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3"} Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.270113 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e63c4f5c8ceec53f4ba2de10b538c9e4c3cf56c2f1d9cb3c30a7c4c35acca3" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.270173 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.408517 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5"] Jan 30 14:12:47 crc kubenswrapper[4793]: E0130 14:12:47.409192 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.409293 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.409565 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="03127c65-edbf-41bd-9543-35ae0eddbff6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.410296 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.412439 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.413253 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.413417 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.414239 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.420157 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5"] Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.520774 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.520900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.521150 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.622770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.622871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.622926 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.626457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.629601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.647100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t7bl5\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.732784 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:47 crc kubenswrapper[4793]: I0130 14:12:47.758518 4793 scope.go:117] "RemoveContainer" containerID="c0abfc20236991093d7e8e2afcdd95243ff40e4122ba5c47744049c4a654a438" Jan 30 14:12:48 crc kubenswrapper[4793]: W0130 14:12:48.336351 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb89c70f6_dabd_4984_8f21_235a9ab2f307.slice/crio-2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913 WatchSource:0}: Error finding container 2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913: Status 404 returned error can't find the container with id 2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913 Jan 30 14:12:48 crc kubenswrapper[4793]: I0130 14:12:48.336874 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5"] Jan 30 14:12:48 crc kubenswrapper[4793]: I0130 14:12:48.398635 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:12:48 crc kubenswrapper[4793]: E0130 14:12:48.398886 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:12:49 crc kubenswrapper[4793]: I0130 14:12:49.294363 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" 
event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerStarted","Data":"2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913"} Jan 30 14:12:50 crc kubenswrapper[4793]: I0130 14:12:50.305692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerStarted","Data":"d115148b62a0b6bbfe89b6c2eecac629107d624be74203eefd689a847c0d0cc0"} Jan 30 14:12:50 crc kubenswrapper[4793]: I0130 14:12:50.324896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" podStartSLOduration=2.449799429 podStartE2EDuration="3.324878225s" podCreationTimestamp="2026-01-30 14:12:47 +0000 UTC" firstStartedPulling="2026-01-30 14:12:48.33928305 +0000 UTC m=+1779.040631541" lastFinishedPulling="2026-01-30 14:12:49.214361846 +0000 UTC m=+1779.915710337" observedRunningTime="2026-01-30 14:12:50.323574173 +0000 UTC m=+1781.024922664" watchObservedRunningTime="2026-01-30 14:12:50.324878225 +0000 UTC m=+1781.026226716" Jan 30 14:12:52 crc kubenswrapper[4793]: I0130 14:12:52.324720 4793 generic.go:334] "Generic (PLEG): container finished" podID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerID="d115148b62a0b6bbfe89b6c2eecac629107d624be74203eefd689a847c0d0cc0" exitCode=0 Jan 30 14:12:52 crc kubenswrapper[4793]: I0130 14:12:52.324844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerDied","Data":"d115148b62a0b6bbfe89b6c2eecac629107d624be74203eefd689a847c0d0cc0"} Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.787180 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.852300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") pod \"b89c70f6-dabd-4984-8f21-235a9ab2f307\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.852389 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") pod \"b89c70f6-dabd-4984-8f21-235a9ab2f307\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.852489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") pod \"b89c70f6-dabd-4984-8f21-235a9ab2f307\" (UID: \"b89c70f6-dabd-4984-8f21-235a9ab2f307\") " Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.858392 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c" (OuterVolumeSpecName: "kube-api-access-x755c") pod "b89c70f6-dabd-4984-8f21-235a9ab2f307" (UID: "b89c70f6-dabd-4984-8f21-235a9ab2f307"). InnerVolumeSpecName "kube-api-access-x755c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.880528 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b89c70f6-dabd-4984-8f21-235a9ab2f307" (UID: "b89c70f6-dabd-4984-8f21-235a9ab2f307"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.886654 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory" (OuterVolumeSpecName: "inventory") pod "b89c70f6-dabd-4984-8f21-235a9ab2f307" (UID: "b89c70f6-dabd-4984-8f21-235a9ab2f307"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.955142 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x755c\" (UniqueName: \"kubernetes.io/projected/b89c70f6-dabd-4984-8f21-235a9ab2f307-kube-api-access-x755c\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.955182 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:53 crc kubenswrapper[4793]: I0130 14:12:53.955193 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b89c70f6-dabd-4984-8f21-235a9ab2f307-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.379623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" event={"ID":"b89c70f6-dabd-4984-8f21-235a9ab2f307","Type":"ContainerDied","Data":"2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913"} Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.379899 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a49ceb4b7dbf82deecb11fb0c020251ebb2772505ff22b814869fb7dfd8f913" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.379956 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t7bl5" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.418462 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"] Jan 30 14:12:54 crc kubenswrapper[4793]: E0130 14:12:54.418880 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.418905 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.419158 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b89c70f6-dabd-4984-8f21-235a9ab2f307" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.419872 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.422227 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.422794 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.423122 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.423284 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.431008 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"]
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486402 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.486427 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588447 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588504 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.588619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.593423 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.604453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.604880 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.611929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Jan 30 14:12:54 crc kubenswrapper[4793]: I0130 14:12:54.792760 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:12:55 crc kubenswrapper[4793]: I0130 14:12:55.295719 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6"] Jan 30 14:12:55 crc kubenswrapper[4793]: I0130 14:12:55.389698 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerStarted","Data":"b0a20486d3bd914ea9a743f522b5e81673abd5990bf5c761a63ac5098352d1ae"} Jan 30 14:12:56 crc kubenswrapper[4793]: I0130 14:12:56.430244 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerStarted","Data":"9c1a7842b45da0abe44314d798df617c5d0b04f46a40c3ce7525fbfda6de30dd"} Jan 30 14:12:56 crc kubenswrapper[4793]: I0130 14:12:56.430874 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" podStartSLOduration=2.011166983 podStartE2EDuration="2.430849495s" podCreationTimestamp="2026-01-30 14:12:54 +0000 UTC" firstStartedPulling="2026-01-30 14:12:55.30349179 +0000 UTC m=+1786.004840281" lastFinishedPulling="2026-01-30 14:12:55.723174282 +0000 UTC m=+1786.424522793" observedRunningTime="2026-01-30 14:12:56.42701247 +0000 UTC m=+1787.128360971" watchObservedRunningTime="2026-01-30 14:12:56.430849495 +0000 UTC m=+1787.132197986" Jan 30 14:13:03 crc kubenswrapper[4793]: I0130 14:13:03.399171 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:03 crc kubenswrapper[4793]: E0130 14:13:03.399744 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:17 crc kubenswrapper[4793]: I0130 14:13:17.397765 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:17 crc kubenswrapper[4793]: E0130 14:13:17.398334 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:28 crc kubenswrapper[4793]: I0130 14:13:28.399755 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:28 crc kubenswrapper[4793]: E0130 14:13:28.400488 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:40 crc kubenswrapper[4793]: I0130 14:13:40.405397 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:40 crc kubenswrapper[4793]: E0130 14:13:40.406399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.277293 4793 scope.go:117] "RemoveContainer" containerID="1538087d2c16a6a8f0cfb34ccb93511ff0ccd4bdfcfc4ccc0a63b77916661e9e" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.310932 4793 scope.go:117] "RemoveContainer" containerID="aa6b97f9cf7eb4c606a580dd2ddef97d729ceaa61803153f00581b30e2022da8" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.343857 4793 scope.go:117] "RemoveContainer" containerID="a550c028a717096d5e1912e30909f7370216f5f1ecf7d5091df70cd1de2ebf87" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.366562 4793 scope.go:117] "RemoveContainer" containerID="4e43c7a23f4a490f4a7852a2f22ad1652b89482999fbd5408077c27f4ed89f64" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.413692 4793 scope.go:117] "RemoveContainer" containerID="9527fe1780f2fb9cca80bad053f2c7ec761fbbe892d439d87f943245f4fb87c3" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.443305 4793 scope.go:117] "RemoveContainer" containerID="6314864eaec40aa342c30cbdd74ccf5a6317bae25e0440cf92e8eb60bfb0deb4" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.472961 4793 scope.go:117] "RemoveContainer" containerID="4199787f9fba9bfc02645d135d0bde12d6b02a89d6508f5d6cbf72ca7396c3a8" Jan 30 14:13:48 crc kubenswrapper[4793]: I0130 14:13:48.493316 4793 scope.go:117] "RemoveContainer" containerID="0f0a92b67bf2c57b29668defe80c5ef06174933a3389b63d549a0beeb9490672" Jan 30 14:13:51 crc kubenswrapper[4793]: I0130 14:13:51.397893 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:13:51 crc kubenswrapper[4793]: E0130 14:13:51.399399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:06 crc kubenswrapper[4793]: I0130 14:14:06.398596 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:06 crc kubenswrapper[4793]: E0130 14:14:06.399259 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:19 crc kubenswrapper[4793]: I0130 14:14:19.398528 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:19 crc kubenswrapper[4793]: E0130 14:14:19.399308 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:33 crc kubenswrapper[4793]: I0130 14:14:33.400089 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:33 crc kubenswrapper[4793]: E0130 14:14:33.401231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:46 crc kubenswrapper[4793]: I0130 14:14:46.399345 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:46 crc kubenswrapper[4793]: E0130 14:14:46.400013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:14:59 crc kubenswrapper[4793]: I0130 14:14:59.398785 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:14:59 crc kubenswrapper[4793]: E0130 14:14:59.399500 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.185360 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.186731 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.188741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.188746 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.327438 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.361844 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.361992 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.362074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.464080 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.464145 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.464250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.467352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod 
\"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.472896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.485613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"collect-profiles-29496375-trbfn\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.507943 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:00 crc kubenswrapper[4793]: I0130 14:15:00.786953 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 14:15:01 crc kubenswrapper[4793]: I0130 14:15:01.632509 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerStarted","Data":"169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae"} Jan 30 14:15:01 crc kubenswrapper[4793]: I0130 14:15:01.632749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerStarted","Data":"4208e4c3725077003c23a3d4fbe0f314a927f813f20d0698586e821994c97e38"} Jan 30 14:15:01 crc kubenswrapper[4793]: I0130 14:15:01.653643 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" podStartSLOduration=1.6536207109999999 podStartE2EDuration="1.653620711s" podCreationTimestamp="2026-01-30 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:15:01.648035143 +0000 UTC m=+1912.349383644" watchObservedRunningTime="2026-01-30 14:15:01.653620711 +0000 UTC m=+1912.354969202" Jan 30 14:15:02 crc kubenswrapper[4793]: I0130 14:15:02.642436 4793 generic.go:334] "Generic (PLEG): container finished" podID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerID="169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae" exitCode=0 Jan 30 14:15:02 crc kubenswrapper[4793]: I0130 14:15:02.642484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerDied","Data":"169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae"} Jan 30 14:15:03 crc kubenswrapper[4793]: I0130 14:15:03.993871 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.038809 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") pod \"dea958b8-aeb8-4696-b604-f1459d6d5608\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.038984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") pod \"dea958b8-aeb8-4696-b604-f1459d6d5608\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.039016 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") pod \"dea958b8-aeb8-4696-b604-f1459d6d5608\" (UID: \"dea958b8-aeb8-4696-b604-f1459d6d5608\") " Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.046552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dea958b8-aeb8-4696-b604-f1459d6d5608" (UID: "dea958b8-aeb8-4696-b604-f1459d6d5608"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.050668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume" (OuterVolumeSpecName: "config-volume") pod "dea958b8-aeb8-4696-b604-f1459d6d5608" (UID: "dea958b8-aeb8-4696-b604-f1459d6d5608"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.064429 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh" (OuterVolumeSpecName: "kube-api-access-96dfh") pod "dea958b8-aeb8-4696-b604-f1459d6d5608" (UID: "dea958b8-aeb8-4696-b604-f1459d6d5608"). InnerVolumeSpecName "kube-api-access-96dfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.141320 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96dfh\" (UniqueName: \"kubernetes.io/projected/dea958b8-aeb8-4696-b604-f1459d6d5608-kube-api-access-96dfh\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.141357 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea958b8-aeb8-4696-b604-f1459d6d5608-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.141366 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dea958b8-aeb8-4696-b604-f1459d6d5608-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.669848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" event={"ID":"dea958b8-aeb8-4696-b604-f1459d6d5608","Type":"ContainerDied","Data":"4208e4c3725077003c23a3d4fbe0f314a927f813f20d0698586e821994c97e38"} Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.670346 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4208e4c3725077003c23a3d4fbe0f314a927f813f20d0698586e821994c97e38" Jan 30 14:15:04 crc kubenswrapper[4793]: I0130 14:15:04.669924 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn" Jan 30 14:15:10 crc kubenswrapper[4793]: I0130 14:15:10.407223 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:15:10 crc kubenswrapper[4793]: E0130 14:15:10.407931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.219426 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.225813 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.247688 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.259202 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8pwcc"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.268263 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gbcdm"] Jan 30 14:15:17 crc kubenswrapper[4793]: I0130 14:15:17.276757 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tq6pw"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.027181 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.042370 4793 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.052617 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.063465 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ff11-account-create-update-p5nhq"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.071602 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3a9f-account-create-update-zkbvj"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.082591 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-22a6-account-create-update-59kzd"] Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.411549 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563516b7-0256-4c05-b1d1-3aa03d692afb" path="/var/lib/kubelet/pods/563516b7-0256-4c05-b1d1-3aa03d692afb/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.414399 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fbb159-dc72-4c34-b2b7-5be6be4df981" path="/var/lib/kubelet/pods/62fbb159-dc72-4c34-b2b7-5be6be4df981/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.415881 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0f274e-c187-4f1a-aa78-508b1761f9fb" path="/var/lib/kubelet/pods/6d0f274e-c187-4f1a-aa78-508b1761f9fb/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.417474 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98986ea8-62f3-4716-9451-0e13567ec2a1" path="/var/lib/kubelet/pods/98986ea8-62f3-4716-9451-0e13567ec2a1/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.418471 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f03641-1e63-4c88-a1f4-f58cf0d81883" path="/var/lib/kubelet/pods/b3f03641-1e63-4c88-a1f4-f58cf0d81883/volumes" Jan 30 14:15:18 crc kubenswrapper[4793]: I0130 14:15:18.420362 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81f2e71-1a70-491f-ba0c-ad1a456345c8" path="/var/lib/kubelet/pods/f81f2e71-1a70-491f-ba0c-ad1a456345c8/volumes" Jan 30 14:15:22 crc kubenswrapper[4793]: I0130 14:15:22.399400 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:15:22 crc kubenswrapper[4793]: I0130 14:15:22.837642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"} Jan 30 14:15:36 crc kubenswrapper[4793]: I0130 14:15:36.052722 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:15:36 crc kubenswrapper[4793]: I0130 14:15:36.063620 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ktlrj"] Jan 30 14:15:36 crc kubenswrapper[4793]: I0130 14:15:36.413205 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec365c0b-f8d9-4b59-bb89-a583d1eb7257" path="/var/lib/kubelet/pods/ec365c0b-f8d9-4b59-bb89-a583d1eb7257/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.054974 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.061517 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.078685 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.089473 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.099555 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t2ntm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.108287 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ac9c-account-create-update-6cnjz"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.115980 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-29ee-account-create-update-56zfp"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.124769 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3f03-account-create-update-s5gbm"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.133417 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.140951 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gvh75"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.148801 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.155557 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-89mld"] Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.412635 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13613099-2932-4476-8032-82095348fb10" path="/var/lib/kubelet/pods/13613099-2932-4476-8032-82095348fb10/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.416016 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f786311-b5ef-427f-b167-c49267de28c6" path="/var/lib/kubelet/pods/1f786311-b5ef-427f-b167-c49267de28c6/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.420793 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2392ab6f-ca9b-4211-bd23-a243ce0ee554" path="/var/lib/kubelet/pods/2392ab6f-ca9b-4211-bd23-a243ce0ee554/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.424190 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c07a623-53fe-44a2-9810-5d1137c659c3" path="/var/lib/kubelet/pods/6c07a623-53fe-44a2-9810-5d1137c659c3/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.426617 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa3c464-d85c-4ea1-816e-7dda86dbb9de" path="/var/lib/kubelet/pods/bfa3c464-d85c-4ea1-816e-7dda86dbb9de/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.430542 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00abb05-5932-47c8-9bd4-34014f966013" path="/var/lib/kubelet/pods/e00abb05-5932-47c8-9bd4-34014f966013/volumes" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.610306 4793 scope.go:117] "RemoveContainer" 
containerID="e076400efeb8dc1f3b157eb928b1925e404de84a86497e6441e959675b9ddf99" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.648390 4793 scope.go:117] "RemoveContainer" containerID="73aa5ec3639d3c82bba61c660ee7af7a234ef59082634808ca0ab14cf7b0d8b7" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.696427 4793 scope.go:117] "RemoveContainer" containerID="e2ff0ec9f064c9873b71344fa59a44b2ef666d7ccd24dbe878aa2ede8a23585c" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.741590 4793 scope.go:117] "RemoveContainer" containerID="b3caaa69aab524adb26fd9c4ff43996ac15d6994d1472ccaa076a079e9b6dba0" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.790135 4793 scope.go:117] "RemoveContainer" containerID="49617378d146339946d69a33ebd155e69d9eb4e257e62cbaa6d931330bc913ba" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.828761 4793 scope.go:117] "RemoveContainer" containerID="be7f675ca5c9219f83817d0e2dc9af6d1edad5191618166a3b580984eb47dd17" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.880570 4793 scope.go:117] "RemoveContainer" containerID="88e81edcf2367a38a7b0e1df9af6001a75b1047fd8c5d669cd70d0dad383c305" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.907158 4793 scope.go:117] "RemoveContainer" containerID="792c9fae56b3faf29df0bfe7bb192d950ab990e8d21594ce52765083cb10c12e" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.931626 4793 scope.go:117] "RemoveContainer" containerID="2bc34dab4f37d7b6429a87926db0d3a5178ff268821d2ee975bfe47cb007e77b" Jan 30 14:15:48 crc kubenswrapper[4793]: I0130 14:15:48.951251 4793 scope.go:117] "RemoveContainer" containerID="75d0a8131037e3e42e5261a0799894acdf4d57f9756c3dd89c681177ee69f801" Jan 30 14:15:49 crc kubenswrapper[4793]: I0130 14:15:49.002693 4793 scope.go:117] "RemoveContainer" containerID="43a04a7b0ede88204c3ce58512e165ac71ea34ba165695393273ca8c2ab37053" Jan 30 14:15:49 crc kubenswrapper[4793]: I0130 14:15:49.021875 4793 scope.go:117] "RemoveContainer" containerID="4a2aafe80408cac269537f00f3232599775bbba2b58f84e2c22d7bc9ff168a56" Jan 30 14:15:49 crc kubenswrapper[4793]: I0130 14:15:49.106959 4793 scope.go:117] "RemoveContainer" containerID="3efaeb1f3745caf5c2ff18e628906fd2ae05a6952ec9376aacd048e2c31a3cdb" Jan 30 14:15:54 crc kubenswrapper[4793]: I0130 14:15:54.048143 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:15:54 crc kubenswrapper[4793]: I0130 14:15:54.056556 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zbw76"] Jan 30 14:15:54 crc kubenswrapper[4793]: I0130 14:15:54.414588 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caec468e-bf72-4c93-8b47-6aac4c7a0b3d" path="/var/lib/kubelet/pods/caec468e-bf72-4c93-8b47-6aac4c7a0b3d/volumes" Jan 30 14:15:57 crc kubenswrapper[4793]: I0130 14:15:57.033842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:15:57 crc kubenswrapper[4793]: I0130 14:15:57.047270 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-btxs9"] Jan 30 14:15:58 crc kubenswrapper[4793]: I0130 14:15:58.409414 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b977757-3d3e-48e5-a1e2-d31ebeda138e" path="/var/lib/kubelet/pods/2b977757-3d3e-48e5-a1e2-d31ebeda138e/volumes" Jan 30 14:16:19 crc kubenswrapper[4793]: I0130 14:16:19.374078 4793 generic.go:334] "Generic (PLEG): container finished" podID="2ba6b544-0042-43d7-abe9-bc40439f804b" 
containerID="9c1a7842b45da0abe44314d798df617c5d0b04f46a40c3ce7525fbfda6de30dd" exitCode=0 Jan 30 14:16:19 crc kubenswrapper[4793]: I0130 14:16:19.374147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerDied","Data":"9c1a7842b45da0abe44314d798df617c5d0b04f46a40c3ce7525fbfda6de30dd"} Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.799130 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841160 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841369 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.841435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") pod \"2ba6b544-0042-43d7-abe9-bc40439f804b\" (UID: \"2ba6b544-0042-43d7-abe9-bc40439f804b\") " Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.856506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt" (OuterVolumeSpecName: "kube-api-access-7s7wt") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "kube-api-access-7s7wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.856883 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.870000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory" (OuterVolumeSpecName: "inventory") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.901698 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ba6b544-0042-43d7-abe9-bc40439f804b" (UID: "2ba6b544-0042-43d7-abe9-bc40439f804b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943826 4793 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943875 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943890 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ba6b544-0042-43d7-abe9-bc40439f804b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:20 crc kubenswrapper[4793]: I0130 14:16:20.943901 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7wt\" (UniqueName: \"kubernetes.io/projected/2ba6b544-0042-43d7-abe9-bc40439f804b-kube-api-access-7s7wt\") on node \"crc\" DevicePath \"\"" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.392382 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" event={"ID":"2ba6b544-0042-43d7-abe9-bc40439f804b","Type":"ContainerDied","Data":"b0a20486d3bd914ea9a743f522b5e81673abd5990bf5c761a63ac5098352d1ae"} Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.392426 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a20486d3bd914ea9a743f522b5e81673abd5990bf5c761a63ac5098352d1ae" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.392495 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.509276 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn"] Jan 30 14:16:21 crc kubenswrapper[4793]: E0130 14:16:21.509645 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerName="collect-profiles" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.509658 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerName="collect-profiles" Jan 30 14:16:21 crc kubenswrapper[4793]: E0130 14:16:21.509678 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba6b544-0042-43d7-abe9-bc40439f804b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.509685 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba6b544-0042-43d7-abe9-bc40439f804b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.511200 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" containerName="collect-profiles" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.511237 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba6b544-0042-43d7-abe9-bc40439f804b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.511847 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.514261 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.519975 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.520112 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.520300 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.533227 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn"] Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.572976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.573523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: 
\"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.573784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.679901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.679988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.680081 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.683891 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.691578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.696611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qgztn\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:21 crc kubenswrapper[4793]: I0130 14:16:21.882015 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:16:22 crc kubenswrapper[4793]: I0130 14:16:22.381814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn"] Jan 30 14:16:22 crc kubenswrapper[4793]: I0130 14:16:22.384765 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:16:22 crc kubenswrapper[4793]: I0130 14:16:22.408346 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerStarted","Data":"4f82d849edc1d49a6b3562c2709f3f78a78f51f4b85225f15283609622841135"} Jan 30 14:16:23 crc kubenswrapper[4793]: I0130 14:16:23.416965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerStarted","Data":"23e76aba0770af4205b13b6be7f728153ae9d3e1a0ab347b0af1c9d3bfcaa979"} Jan 30 14:16:23 crc kubenswrapper[4793]: I0130 14:16:23.440795 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" podStartSLOduration=1.9378308400000002 podStartE2EDuration="2.440777308s" podCreationTimestamp="2026-01-30 14:16:21 +0000 UTC" firstStartedPulling="2026-01-30 14:16:22.384528164 +0000 UTC m=+1993.085876655" lastFinishedPulling="2026-01-30 14:16:22.887474632 +0000 UTC m=+1993.588823123" observedRunningTime="2026-01-30 14:16:23.439792514 +0000 UTC m=+1994.141141005" watchObservedRunningTime="2026-01-30 14:16:23.440777308 +0000 UTC m=+1994.142125799" Jan 30 14:16:49 crc kubenswrapper[4793]: I0130 14:16:49.465719 4793 scope.go:117] "RemoveContainer" containerID="2ab3f639f24308ca232423f0a32206d071a1ba8c33f3edef5fde8eec5d078500" Jan 30 14:16:49 crc kubenswrapper[4793]: I0130 14:16:49.506148 4793 scope.go:117] "RemoveContainer" containerID="aba07025654ae635089a8f296dddf9cfb274c709f33abf63aa5399408783166c" Jan 30 14:17:06 crc kubenswrapper[4793]: I0130 14:17:06.045581 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:17:06 crc kubenswrapper[4793]: I0130 14:17:06.053623 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k4pgl"] Jan 30 14:17:06 crc kubenswrapper[4793]: I0130 14:17:06.415490 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ea0161-c696-4578-a6f7-285a4253dc0f" path="/var/lib/kubelet/pods/b8ea0161-c696-4578-a6f7-285a4253dc0f/volumes" Jan 30 14:17:11 crc kubenswrapper[4793]: I0130 14:17:11.033952 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:17:11 crc kubenswrapper[4793]: I0130 14:17:11.045027 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kkrt6"] Jan 30 14:17:12 crc kubenswrapper[4793]: I0130 14:17:12.409775 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644bf4c3-aaaf-45fa-9692-73406a657226" path="/var/lib/kubelet/pods/644bf4c3-aaaf-45fa-9692-73406a657226/volumes" Jan 30 14:17:14 crc kubenswrapper[4793]: I0130 14:17:14.031683 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:17:14 crc kubenswrapper[4793]: I0130 14:17:14.040636 4793 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/barbican-db-sync-gpt4t"] Jan 30 14:17:14 crc kubenswrapper[4793]: I0130 14:17:14.410119 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="126207f4-9b13-4892-aa15-0616a488af8c" path="/var/lib/kubelet/pods/126207f4-9b13-4892-aa15-0616a488af8c/volumes" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.462783 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.464912 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.472740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.641401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.641571 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.641605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.743561 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.743901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.744027 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.744108 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod 
\"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.744374 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:17 crc kubenswrapper[4793]: I0130 14:17:17.768736 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"community-operators-mbmz8\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:18 crc kubenswrapper[4793]: I0130 14:17:18.031696 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:18 crc kubenswrapper[4793]: I0130 14:17:18.319840 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.058016 4793 generic.go:334] "Generic (PLEG): container finished" podID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" exitCode=0 Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.058142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e"} Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.058167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerStarted","Data":"8541f4e5dad7feb52e06e419a4a0323b953c46b0cd2b983f0cc2f7e0dc8bba8e"} Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.264008 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.266882 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.283788 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.365303 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.365390 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.365435 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.467360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.467425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.467575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.468179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.468762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.488279 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"redhat-marketplace-9jf58\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.583369 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.891744 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.905496 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.976131 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.977384 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.977455 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:19 crc kubenswrapper[4793]: I0130 14:17:19.977669 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.069363 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerStarted","Data":"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c"} Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082252 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082352 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.082965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.083759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.111732 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"redhat-operators-lb62l\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: W0130 14:17:20.172508 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd8c0f6_66a2_4eeb_889c_31dd7d8d8606.slice/crio-0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950 WatchSource:0}: Error finding container 0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950: Status 404 returned error can't find the container with id 0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950 Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.174200 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.288794 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:20 crc kubenswrapper[4793]: I0130 14:17:20.791324 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.078742 4793 generic.go:334] "Generic (PLEG): container finished" podID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerID="97e00f686b282180edd4c6895080d4ff4fea6b3dd37684dbd36be6025541ffd0" exitCode=0 Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.078800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"97e00f686b282180edd4c6895080d4ff4fea6b3dd37684dbd36be6025541ffd0"} Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.078876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerStarted","Data":"0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950"} Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.082121 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5" exitCode=0 Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.082164 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"} Jan 30 14:17:21 crc kubenswrapper[4793]: I0130 14:17:21.082209 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerStarted","Data":"ee99dc24d6773b1ef81ef15f8abc22453a691035e3bb9cf3a583bb3c23f8c1e4"} Jan 30 14:17:22 crc kubenswrapper[4793]: I0130 14:17:22.086532 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:17:22 crc kubenswrapper[4793]: I0130 14:17:22.095592 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4rknj"] Jan 30 14:17:22 crc kubenswrapper[4793]: I0130 14:17:22.408604 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55384b1-b1fd-43eb-8c8d-73398a8f2ecd" path="/var/lib/kubelet/pods/f55384b1-b1fd-43eb-8c8d-73398a8f2ecd/volumes" Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.107868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerStarted","Data":"87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949"} Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.110457 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerStarted","Data":"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"} Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 14:17:23.112849 4793 generic.go:334] "Generic (PLEG): container finished" podID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" exitCode=0 Jan 30 14:17:23 crc kubenswrapper[4793]: I0130 
14:17:23.112893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c"} Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.130520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerStarted","Data":"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227"} Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.133506 4793 generic.go:334] "Generic (PLEG): container finished" podID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerID="87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949" exitCode=0 Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.133552 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949"} Jan 30 14:17:25 crc kubenswrapper[4793]: I0130 14:17:25.150318 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mbmz8" podStartSLOduration=3.260544982 podStartE2EDuration="8.150297829s" podCreationTimestamp="2026-01-30 14:17:17 +0000 UTC" firstStartedPulling="2026-01-30 14:17:19.061017653 +0000 UTC m=+2049.762366144" lastFinishedPulling="2026-01-30 14:17:23.9507705 +0000 UTC m=+2054.652118991" observedRunningTime="2026-01-30 14:17:25.14909081 +0000 UTC m=+2055.850439301" watchObservedRunningTime="2026-01-30 14:17:25.150297829 +0000 UTC m=+2055.851646320" Jan 30 14:17:26 crc kubenswrapper[4793]: I0130 14:17:26.143663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerStarted","Data":"085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c"} Jan 30 14:17:26 crc kubenswrapper[4793]: I0130 14:17:26.173757 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jf58" podStartSLOduration=2.7055159399999997 podStartE2EDuration="7.173739939s" podCreationTimestamp="2026-01-30 14:17:19 +0000 UTC" firstStartedPulling="2026-01-30 14:17:21.080299175 +0000 UTC m=+2051.781647666" lastFinishedPulling="2026-01-30 14:17:25.548523174 +0000 UTC m=+2056.249871665" observedRunningTime="2026-01-30 14:17:26.171513986 +0000 UTC m=+2056.872862467" watchObservedRunningTime="2026-01-30 14:17:26.173739939 +0000 UTC m=+2056.875088430" Jan 30 14:17:28 crc kubenswrapper[4793]: I0130 14:17:28.033489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:28 crc kubenswrapper[4793]: I0130 14:17:28.033604 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:28 crc kubenswrapper[4793]: I0130 14:17:28.092758 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:29 crc kubenswrapper[4793]: I0130 14:17:29.585137 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:29 crc 
kubenswrapper[4793]: I0130 14:17:29.585190 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:30 crc kubenswrapper[4793]: I0130 14:17:30.636886 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9jf58" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:30 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:30 crc kubenswrapper[4793]: > Jan 30 14:17:33 crc kubenswrapper[4793]: I0130 14:17:33.206445 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9" exitCode=0 Jan 30 14:17:33 crc kubenswrapper[4793]: I0130 14:17:33.206556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"} Jan 30 14:17:35 crc kubenswrapper[4793]: I0130 14:17:35.227497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerStarted","Data":"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"} Jan 30 14:17:35 crc kubenswrapper[4793]: I0130 14:17:35.249273 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lb62l" podStartSLOduration=3.107478985 podStartE2EDuration="16.249248191s" podCreationTimestamp="2026-01-30 14:17:19 +0000 UTC" firstStartedPulling="2026-01-30 14:17:21.084504816 +0000 UTC m=+2051.785853297" lastFinishedPulling="2026-01-30 14:17:34.226274002 +0000 UTC m=+2064.927622503" observedRunningTime="2026-01-30 14:17:35.246637617 +0000 UTC m=+2065.947986148" watchObservedRunningTime="2026-01-30 14:17:35.249248191 +0000 UTC m=+2065.950596722" Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.035925 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.050871 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9k2k7"] Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.090634 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.137129 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.254389 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mbmz8" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" containerID="cri-o://4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" gracePeriod=2 Jan 30 14:17:38 crc kubenswrapper[4793]: I0130 14:17:38.413558 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a2a816-c28c-4d74-848a-2821a9d68d70" path="/var/lib/kubelet/pods/16a2a816-c28c-4d74-848a-2821a9d68d70/volumes" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.071817 4793 util.go:48] "No ready sandbox for pod can be found. 
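The pod_startup_latency_tracker entries above encode a small piece of arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration excludes the image-pull window (lastFinishedPulling minus firstStartedPulling) from that total. The redhat-operators-lb62l numbers check out; a sketch that reproduces them from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// Go's default time.String() layout, which matches the timestamps logged
// by the startup latency tracker above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 14:17:19 +0000 UTC")
	firstPull := mustParse("2026-01-30 14:17:21.084504816 +0000 UTC")
	lastPull := mustParse("2026-01-30 14:17:34.226274002 +0000 UTC")
	watched := mustParse("2026-01-30 14:17:35.249248191 +0000 UTC")

	e2e := watched.Sub(created) // 16.249248191s, exactly podStartE2EDuration
	// 3.107479005s here vs. logged podStartSLOduration=3.107478985s; the
	// ~20ns gap is presumably the tracker's own monotonic-clock bookkeeping.
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo)
}
```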
Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.183807 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") pod \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.183907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") pod \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.183996 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") pod \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\" (UID: \"8e44d38b-8b51-4589-bc6a-e69a004b83f6\") " Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.184689 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities" (OuterVolumeSpecName: "utilities") pod "8e44d38b-8b51-4589-bc6a-e69a004b83f6" (UID: "8e44d38b-8b51-4589-bc6a-e69a004b83f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.199247 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr" (OuterVolumeSpecName: "kube-api-access-tmskr") pod "8e44d38b-8b51-4589-bc6a-e69a004b83f6" (UID: "8e44d38b-8b51-4589-bc6a-e69a004b83f6"). InnerVolumeSpecName "kube-api-access-tmskr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.256276 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e44d38b-8b51-4589-bc6a-e69a004b83f6" (UID: "8e44d38b-8b51-4589-bc6a-e69a004b83f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264312 4793 generic.go:334] "Generic (PLEG): container finished" podID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" exitCode=0 Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227"} Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbmz8" event={"ID":"8e44d38b-8b51-4589-bc6a-e69a004b83f6","Type":"ContainerDied","Data":"8541f4e5dad7feb52e06e419a4a0323b953c46b0cd2b983f0cc2f7e0dc8bba8e"} Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264410 4793 scope.go:117] "RemoveContainer" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.264635 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbmz8" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.287309 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.287350 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmskr\" (UniqueName: \"kubernetes.io/projected/8e44d38b-8b51-4589-bc6a-e69a004b83f6-kube-api-access-tmskr\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.287364 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e44d38b-8b51-4589-bc6a-e69a004b83f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.298278 4793 scope.go:117] "RemoveContainer" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.326875 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.333101 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mbmz8"] Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.412205 4793 scope.go:117] "RemoveContainer" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.514122 4793 scope.go:117] "RemoveContainer" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" Jan 30 14:17:39 crc kubenswrapper[4793]: E0130 14:17:39.514537 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227\": container with ID starting with 4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227 not found: ID does not exist" containerID="4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.514649 
4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227"} err="failed to get container status \"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227\": rpc error: code = NotFound desc = could not find container \"4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227\": container with ID starting with 4bb7f5f45e5689403aebbcc201baa7304fa0475d1297dd0f45487057acbc7227 not found: ID does not exist" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.514775 4793 scope.go:117] "RemoveContainer" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" Jan 30 14:17:39 crc kubenswrapper[4793]: E0130 14:17:39.515343 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c\": container with ID starting with 710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c not found: ID does not exist" containerID="710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.515374 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c"} err="failed to get container status \"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c\": rpc error: code = NotFound desc = could not find container \"710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c\": container with ID starting with 710e5a1a916d14428a23b302b582c293d127d647c6f64947fbd6d302fe7b1a4c not found: ID does not exist" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.515387 4793 scope.go:117] "RemoveContainer" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" Jan 30 14:17:39 crc kubenswrapper[4793]: E0130 14:17:39.515627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e\": container with ID starting with 13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e not found: ID does not exist" containerID="13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e" Jan 30 14:17:39 crc kubenswrapper[4793]: I0130 14:17:39.515692 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e"} err="failed to get container status \"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e\": rpc error: code = NotFound desc = could not find container \"13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e\": container with ID starting with 13b6e1ccd9ebe5b13c32404d92314070f7bf8185fc72920cc95f402599e5055e not found: ID does not exist" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.290601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.292145 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.413115 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" 
path="/var/lib/kubelet/pods/8e44d38b-8b51-4589-bc6a-e69a004b83f6/volumes" Jan 30 14:17:40 crc kubenswrapper[4793]: I0130 14:17:40.636097 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9jf58" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:40 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:40 crc kubenswrapper[4793]: > Jan 30 14:17:41 crc kubenswrapper[4793]: I0130 14:17:41.342152 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:41 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:41 crc kubenswrapper[4793]: > Jan 30 14:17:42 crc kubenswrapper[4793]: I0130 14:17:42.414098 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:17:42 crc kubenswrapper[4793]: I0130 14:17:42.414399 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.634320 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.671279 4793 scope.go:117] "RemoveContainer" containerID="f6239492972507362decef8f67d6e0f6bc2cfcc0fcc4cf32f831f0f6c07c0017" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.697670 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.733387 4793 scope.go:117] "RemoveContainer" containerID="3517173292e25a5ef43fbeee36943507781e2a1f6b290f89494c3211b1e796ba" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.763290 4793 scope.go:117] "RemoveContainer" containerID="32ceb7dc9fa876395c4ca9e0e8f70660c79f4304088a586ce49eb1e832993592" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.819249 4793 scope.go:117] "RemoveContainer" containerID="ae10414b3d00dc4ceb2bc58d35069ffd261cdc4f3583eb5ebdf5decfcf70c2e6" Jan 30 14:17:49 crc kubenswrapper[4793]: I0130 14:17:49.944754 4793 scope.go:117] "RemoveContainer" containerID="bff2e9040ab8d382d57ee633ed0d4b720e96e3be65ded6621d8b7a51d1e715d7" Jan 30 14:17:51 crc kubenswrapper[4793]: I0130 14:17:51.066615 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:51 crc kubenswrapper[4793]: I0130 14:17:51.339138 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:17:51 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:17:51 crc kubenswrapper[4793]: > Jan 30 14:17:51 crc 
kubenswrapper[4793]: I0130 14:17:51.375981 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jf58" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" containerID="cri-o://085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c" gracePeriod=2 Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390371 4793 generic.go:334] "Generic (PLEG): container finished" podID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerID="085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c" exitCode=0 Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390609 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c"} Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390637 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jf58" event={"ID":"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606","Type":"ContainerDied","Data":"0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950"} Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.390659 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0726302ccbcd7f3c1d2adba2dc46be2001566bcb486632de14c89447ec6cb950" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.428831 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.556520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") pod \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.556907 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") pod \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.557366 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") pod \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\" (UID: \"cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606\") " Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.557676 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities" (OuterVolumeSpecName: "utilities") pod "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" (UID: "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.558188 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.564358 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8" (OuterVolumeSpecName: "kube-api-access-7b8f8") pod "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" (UID: "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606"). InnerVolumeSpecName "kube-api-access-7b8f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.585611 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" (UID: "cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.660370 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b8f8\" (UniqueName: \"kubernetes.io/projected/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-kube-api-access-7b8f8\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:52 crc kubenswrapper[4793]: I0130 14:17:52.660405 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:17:53 crc kubenswrapper[4793]: I0130 14:17:53.397151 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jf58" Jan 30 14:17:53 crc kubenswrapper[4793]: I0130 14:17:53.432134 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:53 crc kubenswrapper[4793]: I0130 14:17:53.443017 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jf58"] Jan 30 14:17:54 crc kubenswrapper[4793]: I0130 14:17:54.411369 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" path="/var/lib/kubelet/pods/cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606/volumes" Jan 30 14:18:01 crc kubenswrapper[4793]: I0130 14:18:01.341537 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:18:01 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:18:01 crc kubenswrapper[4793]: > Jan 30 14:18:09 crc kubenswrapper[4793]: I0130 14:18:09.550167 4793 generic.go:334] "Generic (PLEG): container finished" podID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerID="23e76aba0770af4205b13b6be7f728153ae9d3e1a0ab347b0af1c9d3bfcaa979" exitCode=0 Jan 30 14:18:09 crc kubenswrapper[4793]: I0130 14:18:09.550228 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerDied","Data":"23e76aba0770af4205b13b6be7f728153ae9d3e1a0ab347b0af1c9d3bfcaa979"} Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.053905 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.218567 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") pod \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.218618 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") pod \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.218661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") pod \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\" (UID: \"f1632f4b-e0e5-4069-a77b-ae4f1911869b\") " Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.226011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql" (OuterVolumeSpecName: "kube-api-access-sk6ql") pod "f1632f4b-e0e5-4069-a77b-ae4f1911869b" (UID: "f1632f4b-e0e5-4069-a77b-ae4f1911869b"). InnerVolumeSpecName "kube-api-access-sk6ql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.256519 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1632f4b-e0e5-4069-a77b-ae4f1911869b" (UID: "f1632f4b-e0e5-4069-a77b-ae4f1911869b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.261416 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory" (OuterVolumeSpecName: "inventory") pod "f1632f4b-e0e5-4069-a77b-ae4f1911869b" (UID: "f1632f4b-e0e5-4069-a77b-ae4f1911869b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.321353 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.321391 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1632f4b-e0e5-4069-a77b-ae4f1911869b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.321403 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk6ql\" (UniqueName: \"kubernetes.io/projected/f1632f4b-e0e5-4069-a77b-ae4f1911869b-kube-api-access-sk6ql\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.341191 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" probeResult="failure" output=< Jan 30 14:18:11 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:18:11 crc kubenswrapper[4793]: > Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.589722 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" event={"ID":"f1632f4b-e0e5-4069-a77b-ae4f1911869b","Type":"ContainerDied","Data":"4f82d849edc1d49a6b3562c2709f3f78a78f51f4b85225f15283609622841135"} Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.589776 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f82d849edc1d49a6b3562c2709f3f78a78f51f4b85225f15283609622841135" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.589800 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qgztn" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.648496 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"] Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649228 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649310 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649429 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649536 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649639 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649713 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649786 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="extract-content" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.649899 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.649958 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.650025 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650096 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="extract-utilities" Jan 30 14:18:11 crc kubenswrapper[4793]: E0130 14:18:11.650163 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650242 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650489 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1632f4b-e0e5-4069-a77b-ae4f1911869b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650594 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cbd8c0f6-66a2-4eeb-889c-31dd7d8d8606" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.650659 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e44d38b-8b51-4589-bc6a-e69a004b83f6" containerName="registry-server" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.651318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.656907 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.656954 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.657003 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.657177 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.665216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"] Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.729042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.729170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.729202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.831455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.831525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.831661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.837192 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.838200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.852300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:11 crc kubenswrapper[4793]: I0130 14:18:11.972909 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.414229 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.414485 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.515400 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc"] Jan 30 14:18:12 crc kubenswrapper[4793]: I0130 14:18:12.599989 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerStarted","Data":"bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436"} Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.054836 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.072083 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.088856 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.104876 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.117281 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a772-account-create-update-4n7jm"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.130674 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-n6kxs"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.140840 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6ttpt"] Jan 30 14:18:13 crc kubenswrapper[4793]: I0130 14:18:13.151694 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-k8j4t"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.035825 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.049023 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e189-account-create-update-hp64h"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.062524 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.073332 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5737-account-create-update-7wpgl"] Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.409686 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="20523849-0caa-42b2-9b52-d5661f90ea95" path="/var/lib/kubelet/pods/20523849-0caa-42b2-9b52-d5661f90ea95/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.410922 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f1b95b-bf17-486c-a4b0-0a2aa96cf847" path="/var/lib/kubelet/pods/22f1b95b-bf17-486c-a4b0-0a2aa96cf847/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.411792 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a263a6b-c717-4bb9-ae46-edfd534e347f" path="/var/lib/kubelet/pods/6a263a6b-c717-4bb9-ae46-edfd534e347f/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.412543 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec3637c-09ef-47f6-bce5-dcc3f4d6e167" path="/var/lib/kubelet/pods/8ec3637c-09ef-47f6-bce5-dcc3f4d6e167/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.413679 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec60191-c8b7-4d7a-a69f-765a9652878b" path="/var/lib/kubelet/pods/aec60191-c8b7-4d7a-a69f-765a9652878b/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.414304 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8e6fd4-c884-4a5d-8189-3929beafa311" path="/var/lib/kubelet/pods/ed8e6fd4-c884-4a5d-8189-3929beafa311/volumes" Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.621551 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerStarted","Data":"a683476bd8aa939b00c339db91216a1956614d78f5849fe148f48cb8ff8b0d51"} Jan 30 14:18:14 crc kubenswrapper[4793]: I0130 14:18:14.642289 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" podStartSLOduration=2.773588899 podStartE2EDuration="3.642272396s" podCreationTimestamp="2026-01-30 14:18:11 +0000 UTC" firstStartedPulling="2026-01-30 14:18:12.526918599 +0000 UTC m=+2103.228267090" lastFinishedPulling="2026-01-30 14:18:13.395602096 +0000 UTC m=+2104.096950587" observedRunningTime="2026-01-30 14:18:14.636006125 +0000 UTC m=+2105.337354626" watchObservedRunningTime="2026-01-30 14:18:14.642272396 +0000 UTC m=+2105.343620887" Jan 30 14:18:20 crc kubenswrapper[4793]: I0130 14:18:20.345479 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:18:20 crc kubenswrapper[4793]: I0130 14:18:20.413268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:18:20 crc kubenswrapper[4793]: I0130 14:18:20.580468 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:18:21 crc kubenswrapper[4793]: I0130 14:18:21.698879 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lb62l" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" containerID="cri-o://bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" gracePeriod=2 Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.188936 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.351215 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") pod \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.351458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") pod \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.351566 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") pod \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\" (UID: \"4d85b4c3-8b96-424c-a7f0-82257f2af0da\") " Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.352184 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities" (OuterVolumeSpecName: "utilities") pod "4d85b4c3-8b96-424c-a7f0-82257f2af0da" (UID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.359933 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742" (OuterVolumeSpecName: "kube-api-access-62742") pod "4d85b4c3-8b96-424c-a7f0-82257f2af0da" (UID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da"). InnerVolumeSpecName "kube-api-access-62742". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.454298 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62742\" (UniqueName: \"kubernetes.io/projected/4d85b4c3-8b96-424c-a7f0-82257f2af0da-kube-api-access-62742\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.454330 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.487367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d85b4c3-8b96-424c-a7f0-82257f2af0da" (UID: "4d85b4c3-8b96-424c-a7f0-82257f2af0da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.557530 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d85b4c3-8b96-424c-a7f0-82257f2af0da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.709850 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" exitCode=0 Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.710992 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb62l" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.711039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"} Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.711614 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb62l" event={"ID":"4d85b4c3-8b96-424c-a7f0-82257f2af0da","Type":"ContainerDied","Data":"ee99dc24d6773b1ef81ef15f8abc22453a691035e3bb9cf3a583bb3c23f8c1e4"} Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.711642 4793 scope.go:117] "RemoveContainer" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.748543 4793 scope.go:117] "RemoveContainer" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.749644 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.758153 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lb62l"] Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.869917 4793 scope.go:117] "RemoveContainer" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.908185 4793 scope.go:117] "RemoveContainer" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" Jan 30 14:18:22 crc kubenswrapper[4793]: E0130 14:18:22.908687 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff\": container with ID starting with bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff not found: ID does not exist" containerID="bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.908819 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff"} err="failed to get container status \"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff\": rpc error: code = NotFound desc = could not find container \"bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff\": container with ID starting with bba6492c0e2d757fc769276e68a451783356614be865effb877e81512cff5fff not found: ID does not exist" Jan 30 14:18:22 crc 
kubenswrapper[4793]: I0130 14:18:22.908926 4793 scope.go:117] "RemoveContainer" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9" Jan 30 14:18:22 crc kubenswrapper[4793]: E0130 14:18:22.909375 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9\": container with ID starting with 9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9 not found: ID does not exist" containerID="9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.909406 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9"} err="failed to get container status \"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9\": rpc error: code = NotFound desc = could not find container \"9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9\": container with ID starting with 9a5eae7ae2ce2dc513b3efd4c15eb779286d4c9887af8912512b51c645c28ce9 not found: ID does not exist" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.909427 4793 scope.go:117] "RemoveContainer" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5" Jan 30 14:18:22 crc kubenswrapper[4793]: E0130 14:18:22.909764 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5\": container with ID starting with e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5 not found: ID does not exist" containerID="e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5" Jan 30 14:18:22 crc kubenswrapper[4793]: I0130 14:18:22.909868 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5"} err="failed to get container status \"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5\": rpc error: code = NotFound desc = could not find container \"e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5\": container with ID starting with e0906f8bf14523777d9af53506e1073d7f9d07d0443c0e75c36164b407c5b2a5 not found: ID does not exist" Jan 30 14:18:24 crc kubenswrapper[4793]: I0130 14:18:24.419811 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" path="/var/lib/kubelet/pods/4d85b4c3-8b96-424c-a7f0-82257f2af0da/volumes" Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.414070 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.414560 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.414598 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.415288 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.415339 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c" gracePeriod=600 Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.936619 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c" exitCode=0 Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.936670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c"} Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.937342 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"} Jan 30 14:18:42 crc kubenswrapper[4793]: I0130 14:18:42.937413 4793 scope.go:117] "RemoveContainer" containerID="a62da2e76d188a3040a5324f9f56a83e5509afc47a92d207ea4e82e85ed2de70" Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.184384 4793 scope.go:117] "RemoveContainer" containerID="28e59e6d294030a165a0e0fc52790f5c8159b9e2c9ea4959f3f53fbe499b4fb9" Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.216778 4793 scope.go:117] "RemoveContainer" containerID="133cf9e3114502e1ed2ef3647567a9a7de600e92d2628121b7ac9be1e2e984c3" Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.272210 4793 scope.go:117] "RemoveContainer" containerID="3016aa7ef767c45f0d4890b13b4c41ef50790ae3c4b545cc67b0d6c6e822f10c" Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.320760 4793 scope.go:117] "RemoveContainer" containerID="2cde16956ce50cc3200c2a37b29cfb6df4e189b94634b0673b55f35da9470b1a" Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.390864 4793 scope.go:117] "RemoveContainer" containerID="8dcf35a2124b97e38202260bc4331118f9488517abad0d7a3392779f07bd54b6" Jan 30 14:18:50 crc kubenswrapper[4793]: I0130 14:18:50.435815 4793 scope.go:117] "RemoveContainer" containerID="de572dff5d2f58a1803be7f7064305ab032e127eb6c4e1ab6668a1723190ad57" Jan 30 14:19:12 crc kubenswrapper[4793]: I0130 14:19:12.052757 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"] Jan 30 14:19:12 crc kubenswrapper[4793]: I0130 14:19:12.061485 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w8lcj"] Jan 30 14:19:12 crc kubenswrapper[4793]: I0130 14:19:12.416935 4793 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4ba071cd-0f26-432d-809e-709cad1a1e64" path="/var/lib/kubelet/pods/4ba071cd-0f26-432d-809e-709cad1a1e64/volumes" Jan 30 14:19:35 crc kubenswrapper[4793]: I0130 14:19:35.037929 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:19:35 crc kubenswrapper[4793]: I0130 14:19:35.046016 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-75k58"] Jan 30 14:19:36 crc kubenswrapper[4793]: I0130 14:19:36.409445 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebcc9239-aedb-41d4-bac8-d03c56c76f4a" path="/var/lib/kubelet/pods/ebcc9239-aedb-41d4-bac8-d03c56c76f4a/volumes" Jan 30 14:19:38 crc kubenswrapper[4793]: I0130 14:19:38.423026 4793 generic.go:334] "Generic (PLEG): container finished" podID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerID="a683476bd8aa939b00c339db91216a1956614d78f5849fe148f48cb8ff8b0d51" exitCode=0 Jan 30 14:19:38 crc kubenswrapper[4793]: I0130 14:19:38.423135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerDied","Data":"a683476bd8aa939b00c339db91216a1956614d78f5849fe148f48cb8ff8b0d51"} Jan 30 14:19:39 crc kubenswrapper[4793]: I0130 14:19:39.030147 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:19:39 crc kubenswrapper[4793]: I0130 14:19:39.039777 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ml6ks"] Jan 30 14:19:39 crc kubenswrapper[4793]: I0130 14:19:39.857847 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.017132 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") pod \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.017170 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") pod \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.017267 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") pod \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\" (UID: \"260f1ea9-6ba5-40aa-ab56-e95237cb1009\") " Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.026345 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb" (OuterVolumeSpecName: "kube-api-access-v7dcb") pod "260f1ea9-6ba5-40aa-ab56-e95237cb1009" (UID: "260f1ea9-6ba5-40aa-ab56-e95237cb1009"). InnerVolumeSpecName "kube-api-access-v7dcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.046386 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory" (OuterVolumeSpecName: "inventory") pod "260f1ea9-6ba5-40aa-ab56-e95237cb1009" (UID: "260f1ea9-6ba5-40aa-ab56-e95237cb1009"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.052633 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "260f1ea9-6ba5-40aa-ab56-e95237cb1009" (UID: "260f1ea9-6ba5-40aa-ab56-e95237cb1009"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.120658 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7dcb\" (UniqueName: \"kubernetes.io/projected/260f1ea9-6ba5-40aa-ab56-e95237cb1009-kube-api-access-v7dcb\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.120696 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.120707 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/260f1ea9-6ba5-40aa-ab56-e95237cb1009-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.429145 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bc0c92-8817-447f-a591-d593d49d1b22" path="/var/lib/kubelet/pods/45bc0c92-8817-447f-a591-d593d49d1b22/volumes" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.446700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" event={"ID":"260f1ea9-6ba5-40aa-ab56-e95237cb1009","Type":"ContainerDied","Data":"bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436"} Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.446748 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.446841 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc" Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.484696 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod260f1ea9_6ba5_40aa_ab56_e95237cb1009.slice/crio-bcd3b8c67e7c3da4fa975f67cb3075ff012ce7cd853f89c9542d544b042c3436\": RecentStats: unable to find data in memory cache]" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.543398 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"] Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.543959 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-utilities" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.543984 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-utilities" Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.544020 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-content" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544028 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="extract-content" Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.544042 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 14:19:40 crc kubenswrapper[4793]: E0130 14:19:40.544643 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544649 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544828 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="260f1ea9-6ba5-40aa-ab56-e95237cb1009" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.544850 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d85b4c3-8b96-424c-a7f0-82257f2af0da" containerName="registry-server" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.545541 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.551406 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.551543 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.551673 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.552094 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.557540 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"] Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.632700 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.632784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.632872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.733919 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.734040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.734777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.738122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.738456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.752342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:40 crc kubenswrapper[4793]: I0130 14:19:40.869568 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:41 crc kubenswrapper[4793]: I0130 14:19:41.368812 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt"] Jan 30 14:19:41 crc kubenswrapper[4793]: I0130 14:19:41.454933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerStarted","Data":"b30b161b2c886673222efbf4812da71581156b85df480b4917abb89388fa0ed3"} Jan 30 14:19:43 crc kubenswrapper[4793]: I0130 14:19:43.475403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerStarted","Data":"0d34f2957d2ad401e219ae0354f20a2ece09cdf58a83fa508fad82e05c0cdbeb"} Jan 30 14:19:43 crc kubenswrapper[4793]: I0130 14:19:43.495616 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" podStartSLOduration=2.579955269 podStartE2EDuration="3.495594061s" podCreationTimestamp="2026-01-30 14:19:40 +0000 UTC" firstStartedPulling="2026-01-30 14:19:41.374147227 +0000 UTC m=+2192.075495718" lastFinishedPulling="2026-01-30 14:19:42.289786019 +0000 UTC m=+2192.991134510" observedRunningTime="2026-01-30 14:19:43.492280661 +0000 UTC m=+2194.193629162" watchObservedRunningTime="2026-01-30 14:19:43.495594061 +0000 UTC m=+2194.196942552" Jan 30 14:19:47 crc kubenswrapper[4793]: I0130 14:19:47.519752 4793 generic.go:334] "Generic (PLEG): container finished" podID="dcc6f491-d722-48e4-bcb8-8a9de7603786" 
containerID="0d34f2957d2ad401e219ae0354f20a2ece09cdf58a83fa508fad82e05c0cdbeb" exitCode=0 Jan 30 14:19:47 crc kubenswrapper[4793]: I0130 14:19:47.520249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerDied","Data":"0d34f2957d2ad401e219ae0354f20a2ece09cdf58a83fa508fad82e05c0cdbeb"} Jan 30 14:19:48 crc kubenswrapper[4793]: I0130 14:19:48.997193 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.126163 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") pod \"dcc6f491-d722-48e4-bcb8-8a9de7603786\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.126306 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") pod \"dcc6f491-d722-48e4-bcb8-8a9de7603786\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.126438 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") pod \"dcc6f491-d722-48e4-bcb8-8a9de7603786\" (UID: \"dcc6f491-d722-48e4-bcb8-8a9de7603786\") " Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.134463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg" (OuterVolumeSpecName: "kube-api-access-dwbpg") pod "dcc6f491-d722-48e4-bcb8-8a9de7603786" (UID: "dcc6f491-d722-48e4-bcb8-8a9de7603786"). InnerVolumeSpecName "kube-api-access-dwbpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.156906 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory" (OuterVolumeSpecName: "inventory") pod "dcc6f491-d722-48e4-bcb8-8a9de7603786" (UID: "dcc6f491-d722-48e4-bcb8-8a9de7603786"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.159833 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dcc6f491-d722-48e4-bcb8-8a9de7603786" (UID: "dcc6f491-d722-48e4-bcb8-8a9de7603786"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.229065 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.229108 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbpg\" (UniqueName: \"kubernetes.io/projected/dcc6f491-d722-48e4-bcb8-8a9de7603786-kube-api-access-dwbpg\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.229124 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc6f491-d722-48e4-bcb8-8a9de7603786-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.539472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" event={"ID":"dcc6f491-d722-48e4-bcb8-8a9de7603786","Type":"ContainerDied","Data":"b30b161b2c886673222efbf4812da71581156b85df480b4917abb89388fa0ed3"} Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.539517 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30b161b2c886673222efbf4812da71581156b85df480b4917abb89388fa0ed3" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.539524 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.679225 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"] Jan 30 14:19:49 crc kubenswrapper[4793]: E0130 14:19:49.679717 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.679742 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.679925 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc6f491-d722-48e4-bcb8-8a9de7603786" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.680570 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.685658 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.685908 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.686200 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.689801 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"] Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.690036 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.840604 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.840672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.840803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.942328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.942383 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.942421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.947683 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.948632 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:49 crc kubenswrapper[4793]: I0130 14:19:49.964425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lqrxr\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.007176 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.516236 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr"] Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.554178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerStarted","Data":"198e531d99fd0bd9e1dbdbead68ffefc142e56214e16f99f17371a7795b85dcf"} Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.616698 4793 scope.go:117] "RemoveContainer" containerID="90b9675474db2f014b16f6ff676632a8fb2215b39c16f9464ddb8818d9838269" Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.662713 4793 scope.go:117] "RemoveContainer" containerID="c3407efb2fdb58b554465a66ada59f330d66ff60faa105c9e72328442584be37" Jan 30 14:19:50 crc kubenswrapper[4793]: I0130 14:19:50.708986 4793 scope.go:117] "RemoveContainer" containerID="d5dca6794b88409e9b00ca4874a836a8fc72adc63350f5d3d74d780410a0a920" Jan 30 14:19:51 crc kubenswrapper[4793]: I0130 14:19:51.573806 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerStarted","Data":"caeb3293818ec051ac12e0602b0d244314fd25439754a9c03c0a1727737001ef"} Jan 30 14:19:51 crc kubenswrapper[4793]: I0130 14:19:51.600668 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" podStartSLOduration=2.109043182 podStartE2EDuration="2.600647775s" podCreationTimestamp="2026-01-30 14:19:49 +0000 UTC" firstStartedPulling="2026-01-30 14:19:50.524677089 +0000 UTC m=+2201.226025580" lastFinishedPulling="2026-01-30 14:19:51.016281682 
+0000 UTC m=+2201.717630173" observedRunningTime="2026-01-30 14:19:51.5926418 +0000 UTC m=+2202.293990291" watchObservedRunningTime="2026-01-30 14:19:51.600647775 +0000 UTC m=+2202.301996266" Jan 30 14:20:22 crc kubenswrapper[4793]: I0130 14:20:22.053787 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"] Jan 30 14:20:22 crc kubenswrapper[4793]: I0130 14:20:22.061796 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mrwzs"] Jan 30 14:20:22 crc kubenswrapper[4793]: I0130 14:20:22.410037 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ed75d8-77f2-4c4d-b725-b703b8ce2980" path="/var/lib/kubelet/pods/33ed75d8-77f2-4c4d-b725-b703b8ce2980/volumes" Jan 30 14:20:31 crc kubenswrapper[4793]: I0130 14:20:31.969272 4793 generic.go:334] "Generic (PLEG): container finished" podID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerID="caeb3293818ec051ac12e0602b0d244314fd25439754a9c03c0a1727737001ef" exitCode=0 Jan 30 14:20:31 crc kubenswrapper[4793]: I0130 14:20:31.969354 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerDied","Data":"caeb3293818ec051ac12e0602b0d244314fd25439754a9c03c0a1727737001ef"} Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.427808 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.534964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") pod \"1ee9c552-088f-4e61-961e-7062bf6e874b\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.535492 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") pod \"1ee9c552-088f-4e61-961e-7062bf6e874b\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.535686 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") pod \"1ee9c552-088f-4e61-961e-7062bf6e874b\" (UID: \"1ee9c552-088f-4e61-961e-7062bf6e874b\") " Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.546417 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5" (OuterVolumeSpecName: "kube-api-access-rk8b5") pod "1ee9c552-088f-4e61-961e-7062bf6e874b" (UID: "1ee9c552-088f-4e61-961e-7062bf6e874b"). InnerVolumeSpecName "kube-api-access-rk8b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.570496 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory" (OuterVolumeSpecName: "inventory") pod "1ee9c552-088f-4e61-961e-7062bf6e874b" (UID: "1ee9c552-088f-4e61-961e-7062bf6e874b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.570795 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ee9c552-088f-4e61-961e-7062bf6e874b" (UID: "1ee9c552-088f-4e61-961e-7062bf6e874b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.639244 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk8b5\" (UniqueName: \"kubernetes.io/projected/1ee9c552-088f-4e61-961e-7062bf6e874b-kube-api-access-rk8b5\") on node \"crc\" DevicePath \"\"" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.639283 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.639293 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee9c552-088f-4e61-961e-7062bf6e874b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.987498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" event={"ID":"1ee9c552-088f-4e61-961e-7062bf6e874b","Type":"ContainerDied","Data":"198e531d99fd0bd9e1dbdbead68ffefc142e56214e16f99f17371a7795b85dcf"} Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.987534 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="198e531d99fd0bd9e1dbdbead68ffefc142e56214e16f99f17371a7795b85dcf" Jan 30 14:20:33 crc kubenswrapper[4793]: I0130 14:20:33.987582 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lqrxr" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.076977 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"] Jan 30 14:20:34 crc kubenswrapper[4793]: E0130 14:20:34.077393 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.077420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.077673 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee9c552-088f-4e61-961e-7062bf6e874b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.078363 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.081291 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.081493 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.081702 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.082447 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.095645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"] Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.249901 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.250036 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.250260 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.352535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.353000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.353252 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.357493 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.361750 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.380489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jchk2\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:34 crc kubenswrapper[4793]: I0130 14:20:34.395855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" Jan 30 14:20:35 crc kubenswrapper[4793]: I0130 14:20:35.129815 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"] Jan 30 14:20:36 crc kubenswrapper[4793]: I0130 14:20:36.003794 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerStarted","Data":"a9015e79c329eb72d41d603b294a22ae5d93178d8d2d64cf54528b6f45b377bf"} Jan 30 14:20:36 crc kubenswrapper[4793]: I0130 14:20:36.004159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerStarted","Data":"fad95305628b0bb9ff4fbb99102a672ed83873978699983c18378fffedce3842"} Jan 30 14:20:36 crc kubenswrapper[4793]: I0130 14:20:36.028348 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" podStartSLOduration=1.648952768 podStartE2EDuration="2.028321338s" podCreationTimestamp="2026-01-30 14:20:34 +0000 UTC" firstStartedPulling="2026-01-30 14:20:35.141240431 +0000 UTC m=+2245.842588922" lastFinishedPulling="2026-01-30 14:20:35.520609001 +0000 UTC m=+2246.221957492" observedRunningTime="2026-01-30 14:20:36.015925425 +0000 UTC m=+2246.717273926" watchObservedRunningTime="2026-01-30 14:20:36.028321338 +0000 UTC m=+2246.729669859" Jan 30 14:20:42 crc kubenswrapper[4793]: I0130 14:20:42.413934 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:20:42 crc kubenswrapper[4793]: I0130 14:20:42.414505 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:20:50 crc kubenswrapper[4793]: I0130 14:20:50.810628 4793 scope.go:117] "RemoveContainer" containerID="596a656189ddb8dd9803e2c0c8dc2a8724dea1aee86c92cab0644fce8e091c80" Jan 30 14:21:12 crc kubenswrapper[4793]: I0130 14:21:12.413301 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:21:12 crc kubenswrapper[4793]: I0130 14:21:12.414269 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.691299 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"] Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.693736 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.694951 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.695293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.695417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.706551 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"] Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"certified-operators-7dr4h\" (UID: 
\"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.798954 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.799799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:18 crc kubenswrapper[4793]: I0130 14:21:18.834066 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"certified-operators-7dr4h\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:19 crc kubenswrapper[4793]: I0130 14:21:19.054068 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:19 crc kubenswrapper[4793]: I0130 14:21:19.606462 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"] Jan 30 14:21:20 crc kubenswrapper[4793]: I0130 14:21:20.357426 4793 generic.go:334] "Generic (PLEG): container finished" podID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc" exitCode=0 Jan 30 14:21:20 crc kubenswrapper[4793]: I0130 14:21:20.357487 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"} Jan 30 14:21:20 crc kubenswrapper[4793]: I0130 14:21:20.357718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerStarted","Data":"071505cdc6018a0a16ae65f42adeffb4b74a81940f0091be45398cfd1a17cab6"} Jan 30 14:21:22 crc kubenswrapper[4793]: I0130 14:21:22.375591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerStarted","Data":"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"} Jan 30 14:21:25 crc kubenswrapper[4793]: I0130 14:21:25.401395 4793 generic.go:334] "Generic (PLEG): container finished" podID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65" exitCode=0 Jan 30 14:21:25 crc kubenswrapper[4793]: I0130 14:21:25.401480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"} Jan 30 14:21:25 crc kubenswrapper[4793]: I0130 14:21:25.404444 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:21:26 crc kubenswrapper[4793]: I0130 14:21:26.412947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerStarted","Data":"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"} Jan 30 14:21:26 crc kubenswrapper[4793]: I0130 14:21:26.433684 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dr4h" podStartSLOduration=2.929897624 podStartE2EDuration="8.433664339s" podCreationTimestamp="2026-01-30 14:21:18 +0000 UTC" firstStartedPulling="2026-01-30 14:21:20.359235545 +0000 UTC m=+2291.060584036" lastFinishedPulling="2026-01-30 14:21:25.86300227 +0000 UTC m=+2296.564350751" observedRunningTime="2026-01-30 14:21:26.428138984 +0000 UTC m=+2297.129487485" watchObservedRunningTime="2026-01-30 14:21:26.433664339 +0000 UTC m=+2297.135012830" Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.054505 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.054844 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dr4h" 
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.107938 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.448039 4793 generic.go:334] "Generic (PLEG): container finished" podID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerID="a9015e79c329eb72d41d603b294a22ae5d93178d8d2d64cf54528b6f45b377bf" exitCode=0
Jan 30 14:21:29 crc kubenswrapper[4793]: I0130 14:21:29.448102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerDied","Data":"a9015e79c329eb72d41d603b294a22ae5d93178d8d2d64cf54528b6f45b377bf"}
Jan 30 14:21:30 crc kubenswrapper[4793]: I0130 14:21:30.964418 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.123521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") pod \"44f4e8fd-4511-4670-944a-e37dfc6238c8\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") "
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.123895 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") pod \"44f4e8fd-4511-4670-944a-e37dfc6238c8\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") "
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.124022 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") pod \"44f4e8fd-4511-4670-944a-e37dfc6238c8\" (UID: \"44f4e8fd-4511-4670-944a-e37dfc6238c8\") "
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.130171 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d" (OuterVolumeSpecName: "kube-api-access-kkb6d") pod "44f4e8fd-4511-4670-944a-e37dfc6238c8" (UID: "44f4e8fd-4511-4670-944a-e37dfc6238c8"). InnerVolumeSpecName "kube-api-access-kkb6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.151699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "44f4e8fd-4511-4670-944a-e37dfc6238c8" (UID: "44f4e8fd-4511-4670-944a-e37dfc6238c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.164884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory" (OuterVolumeSpecName: "inventory") pod "44f4e8fd-4511-4670-944a-e37dfc6238c8" (UID: "44f4e8fd-4511-4670-944a-e37dfc6238c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.226846 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.226899 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44f4e8fd-4511-4670-944a-e37dfc6238c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.226916 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkb6d\" (UniqueName: \"kubernetes.io/projected/44f4e8fd-4511-4670-944a-e37dfc6238c8-kube-api-access-kkb6d\") on node \"crc\" DevicePath \"\""
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.463739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2" event={"ID":"44f4e8fd-4511-4670-944a-e37dfc6238c8","Type":"ContainerDied","Data":"fad95305628b0bb9ff4fbb99102a672ed83873978699983c18378fffedce3842"}
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.463779 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad95305628b0bb9ff4fbb99102a672ed83873978699983c18378fffedce3842"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.463864 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jchk2"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.584259 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nlncv"]
Jan 30 14:21:31 crc kubenswrapper[4793]: E0130 14:21:31.588487 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.588599 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.589003 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f4e8fd-4511-4670-944a-e37dfc6238c8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.590003 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.600245 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.600610 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.600695 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.601172 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.616471 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nlncv"]
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.736302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.736386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.736449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.838001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.838089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.838119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.853205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.853476 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.854663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"ssh-known-hosts-edpm-deployment-nlncv\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:31 crc kubenswrapper[4793]: I0130 14:21:31.917126 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv"
Jan 30 14:21:32 crc kubenswrapper[4793]: I0130 14:21:32.438530 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nlncv"]
Jan 30 14:21:32 crc kubenswrapper[4793]: I0130 14:21:32.474029 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerStarted","Data":"07909ff107f4055891d6e17429bccfc51538043329feda79f63c9ffa07efd7fc"}
Jan 30 14:21:33 crc kubenswrapper[4793]: I0130 14:21:33.486328 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerStarted","Data":"cb9a5c92d49ff68631aafe317707ea0d2062de92795fb0e86959969982b5b945"}
Jan 30 14:21:33 crc kubenswrapper[4793]: I0130 14:21:33.510003 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" podStartSLOduration=1.857994865 podStartE2EDuration="2.509976596s" podCreationTimestamp="2026-01-30 14:21:31 +0000 UTC" firstStartedPulling="2026-01-30 14:21:32.445329455 +0000 UTC m=+2303.146677956" lastFinishedPulling="2026-01-30 14:21:33.097311196 +0000 UTC m=+2303.798659687" observedRunningTime="2026-01-30 14:21:33.503257042 +0000 UTC m=+2304.204605533" watchObservedRunningTime="2026-01-30 14:21:33.509976596 +0000 UTC m=+2304.211325087"
Jan 30 14:21:39 crc kubenswrapper[4793]: I0130 14:21:39.106983 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dr4h"
Jan 30 14:21:39 crc kubenswrapper[4793]: I0130 14:21:39.823935 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"]
Jan 30 14:21:39 crc kubenswrapper[4793]: I0130 14:21:39.824487 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7dr4h" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server" containerID="cri-o://7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" gracePeriod=2
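[Editor's note] The span above shows the kubelet volume reconciler's full lifecycle in both directions: teardown for the finished configure-os pod (UnmountVolume started, then UnmountVolume.TearDown succeeded, then "Volume detached") and setup for the new ssh-known-hosts pod (VerifyControllerAttachedVolume, MountVolume started, MountVolume.SetUp succeeded). A rough Python sketch for following each volume through these states in a dump like this one; the filename is hypothetical, and note that the TearDown lines log the full plugin path rather than the short volume name, so those group separately:

```python
# Group reconciler_common/operation_generator messages by volume name to follow
# each volume's mount/unmount lifecycle. Assumes one journal entry per line.
import re
from collections import defaultdict

PATTERN = re.compile(
    r'(?P<op>UnmountVolume started|UnmountVolume\.TearDown succeeded|'
    r'Volume detached|VerifyControllerAttachedVolume started|'
    r'MountVolume started|MountVolume\.SetUp succeeded)'
    r'.*?volume \\?"(?P<volume>[^"\\]+)\\?"')

lifecycle = defaultdict(list)
with open("kubelet-journal.log") as fh:  # hypothetical filename
    for line in fh:
        m = PATTERN.search(line)
        if m:
            lifecycle[m.group("volume")].append(m.group("op"))

for volume, ops in lifecycle.items():
    print(volume, "->", " / ".join(ops))
```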
containerID="cri-o://7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" gracePeriod=2 Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.480435 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554149 4793 generic.go:334] "Generic (PLEG): container finished" podID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" exitCode=0 Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"} Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dr4h" event={"ID":"abef0532-bda8-460d-80b9-c4e44ce7f68e","Type":"ContainerDied","Data":"071505cdc6018a0a16ae65f42adeffb4b74a81940f0091be45398cfd1a17cab6"} Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554226 4793 scope.go:117] "RemoveContainer" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.554349 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dr4h" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.579147 4793 scope.go:117] "RemoveContainer" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.604255 4793 scope.go:117] "RemoveContainer" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.621147 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") pod \"abef0532-bda8-460d-80b9-c4e44ce7f68e\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.621297 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") pod \"abef0532-bda8-460d-80b9-c4e44ce7f68e\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.621447 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") pod \"abef0532-bda8-460d-80b9-c4e44ce7f68e\" (UID: \"abef0532-bda8-460d-80b9-c4e44ce7f68e\") " Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.622444 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities" (OuterVolumeSpecName: "utilities") pod "abef0532-bda8-460d-80b9-c4e44ce7f68e" (UID: "abef0532-bda8-460d-80b9-c4e44ce7f68e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.627871 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4" (OuterVolumeSpecName: "kube-api-access-tcvg4") pod "abef0532-bda8-460d-80b9-c4e44ce7f68e" (UID: "abef0532-bda8-460d-80b9-c4e44ce7f68e"). InnerVolumeSpecName "kube-api-access-tcvg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.677854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abef0532-bda8-460d-80b9-c4e44ce7f68e" (UID: "abef0532-bda8-460d-80b9-c4e44ce7f68e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.692983 4793 scope.go:117] "RemoveContainer" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" Jan 30 14:21:40 crc kubenswrapper[4793]: E0130 14:21:40.693754 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395\": container with ID starting with 7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395 not found: ID does not exist" containerID="7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.693784 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395"} err="failed to get container status \"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395\": rpc error: code = NotFound desc = could not find container \"7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395\": container with ID starting with 7668a95f34853f33935dd432f27663bd07ae8a431be3686e99eb4a57be8af395 not found: ID does not exist" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.693804 4793 scope.go:117] "RemoveContainer" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65" Jan 30 14:21:40 crc kubenswrapper[4793]: E0130 14:21:40.694098 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65\": container with ID starting with dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65 not found: ID does not exist" containerID="dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.694130 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65"} err="failed to get container status \"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65\": rpc error: code = NotFound desc = could not find container \"dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65\": container with ID starting with dd8dd79d6200756ad48b3b87b388f00760e03ae32a191cae4b686a0f114bff65 not found: ID does not exist" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.694144 4793 scope.go:117] "RemoveContainer" 
containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc" Jan 30 14:21:40 crc kubenswrapper[4793]: E0130 14:21:40.694627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc\": container with ID starting with 220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc not found: ID does not exist" containerID="220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.694694 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc"} err="failed to get container status \"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc\": rpc error: code = NotFound desc = could not find container \"220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc\": container with ID starting with 220565c311b8734c0e0aa83cd7c45cd4e7a3fa7ed0d7ce68ab71675287c76bcc not found: ID does not exist" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.723234 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.723446 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abef0532-bda8-460d-80b9-c4e44ce7f68e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.723507 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcvg4\" (UniqueName: \"kubernetes.io/projected/abef0532-bda8-460d-80b9-c4e44ce7f68e-kube-api-access-tcvg4\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.888313 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"] Jan 30 14:21:40 crc kubenswrapper[4793]: I0130 14:21:40.896551 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7dr4h"] Jan 30 14:21:41 crc kubenswrapper[4793]: I0130 14:21:41.563623 4793 generic.go:334] "Generic (PLEG): container finished" podID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerID="cb9a5c92d49ff68631aafe317707ea0d2062de92795fb0e86959969982b5b945" exitCode=0 Jan 30 14:21:41 crc kubenswrapper[4793]: I0130 14:21:41.563926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerDied","Data":"cb9a5c92d49ff68631aafe317707ea0d2062de92795fb0e86959969982b5b945"} Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.413402 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.413458 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.414521 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" path="/var/lib/kubelet/pods/abef0532-bda8-460d-80b9-c4e44ce7f68e/volumes" Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.416271 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.417853 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.418004 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" gracePeriod=600 Jan 30 14:21:42 crc kubenswrapper[4793]: E0130 14:21:42.548144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584065 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" exitCode=0 Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"} Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584309 4793 scope.go:117] "RemoveContainer" containerID="c7109bad76c4800462c715a31fed08fa68ade41549aa0ee47196c92cb6ec6f9c" Jan 30 14:21:42 crc kubenswrapper[4793]: I0130 14:21:42.584916 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:21:42 crc kubenswrapper[4793]: E0130 14:21:42.585231 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.069207 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.168892 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") pod \"3cad1dbc-effe-48d8-af45-df0a45e16783\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.168958 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") pod \"3cad1dbc-effe-48d8-af45-df0a45e16783\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.169077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") pod \"3cad1dbc-effe-48d8-af45-df0a45e16783\" (UID: \"3cad1dbc-effe-48d8-af45-df0a45e16783\") " Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.174978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z" (OuterVolumeSpecName: "kube-api-access-2s29z") pod "3cad1dbc-effe-48d8-af45-df0a45e16783" (UID: "3cad1dbc-effe-48d8-af45-df0a45e16783"). InnerVolumeSpecName "kube-api-access-2s29z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.200874 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3cad1dbc-effe-48d8-af45-df0a45e16783" (UID: "3cad1dbc-effe-48d8-af45-df0a45e16783"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.208326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3cad1dbc-effe-48d8-af45-df0a45e16783" (UID: "3cad1dbc-effe-48d8-af45-df0a45e16783"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.270636 4793 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.270693 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cad1dbc-effe-48d8-af45-df0a45e16783-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.270710 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s29z\" (UniqueName: \"kubernetes.io/projected/3cad1dbc-effe-48d8-af45-df0a45e16783-kube-api-access-2s29z\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.595879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" event={"ID":"3cad1dbc-effe-48d8-af45-df0a45e16783","Type":"ContainerDied","Data":"07909ff107f4055891d6e17429bccfc51538043329feda79f63c9ffa07efd7fc"} Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.595943 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07909ff107f4055891d6e17429bccfc51538043329feda79f63c9ffa07efd7fc" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.596002 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nlncv" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.677944 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"] Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678371 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678389 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server" Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678414 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-content" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-content" Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678435 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerName="ssh-known-hosts-edpm-deployment" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678443 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerName="ssh-known-hosts-edpm-deployment" Jan 30 14:21:43 crc kubenswrapper[4793]: E0130 14:21:43.678467 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-utilities" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678475 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="extract-utilities" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678671 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="abef0532-bda8-460d-80b9-c4e44ce7f68e" containerName="registry-server" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.678690 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cad1dbc-effe-48d8-af45-df0a45e16783" containerName="ssh-known-hosts-edpm-deployment" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.679393 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.689189 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.689367 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.689427 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.694872 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.705521 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"] Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.779553 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.779620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.779648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.881023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.881115 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" 
(UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.881256 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.887197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.892928 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.900587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5q58\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:43 crc kubenswrapper[4793]: I0130 14:21:43.996119 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:44 crc kubenswrapper[4793]: I0130 14:21:44.716008 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58"] Jan 30 14:21:45 crc kubenswrapper[4793]: I0130 14:21:45.628247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerStarted","Data":"d4a75b71a6f08d7e1ae63d9f7e8be9b4c3fd94122dc13efb955e3a3da657f8ea"} Jan 30 14:21:45 crc kubenswrapper[4793]: I0130 14:21:45.628610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerStarted","Data":"5056c89f893c22f6d895f2db21ec550d28feaa74d141a03d37334d3db4ad6603"} Jan 30 14:21:45 crc kubenswrapper[4793]: I0130 14:21:45.649704 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" podStartSLOduration=2.204764276 podStartE2EDuration="2.649683473s" podCreationTimestamp="2026-01-30 14:21:43 +0000 UTC" firstStartedPulling="2026-01-30 14:21:44.726785143 +0000 UTC m=+2315.428133634" lastFinishedPulling="2026-01-30 14:21:45.17170434 +0000 UTC m=+2315.873052831" observedRunningTime="2026-01-30 14:21:45.648574695 +0000 UTC m=+2316.349923196" watchObservedRunningTime="2026-01-30 14:21:45.649683473 +0000 UTC m=+2316.351031964" Jan 30 14:21:53 crc kubenswrapper[4793]: I0130 14:21:53.753078 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerDied","Data":"d4a75b71a6f08d7e1ae63d9f7e8be9b4c3fd94122dc13efb955e3a3da657f8ea"} Jan 30 14:21:53 crc kubenswrapper[4793]: I0130 14:21:53.753038 4793 generic.go:334] "Generic (PLEG): container finished" podID="7915ec77-ca16-4f23-a367-42b525c80284" containerID="d4a75b71a6f08d7e1ae63d9f7e8be9b4c3fd94122dc13efb955e3a3da657f8ea" exitCode=0 Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.203945 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.266253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") pod \"7915ec77-ca16-4f23-a367-42b525c80284\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.266335 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") pod \"7915ec77-ca16-4f23-a367-42b525c80284\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.266371 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") pod \"7915ec77-ca16-4f23-a367-42b525c80284\" (UID: \"7915ec77-ca16-4f23-a367-42b525c80284\") " Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.272203 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b" (OuterVolumeSpecName: "kube-api-access-tb45b") pod "7915ec77-ca16-4f23-a367-42b525c80284" (UID: "7915ec77-ca16-4f23-a367-42b525c80284"). InnerVolumeSpecName "kube-api-access-tb45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.291641 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7915ec77-ca16-4f23-a367-42b525c80284" (UID: "7915ec77-ca16-4f23-a367-42b525c80284"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.298949 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory" (OuterVolumeSpecName: "inventory") pod "7915ec77-ca16-4f23-a367-42b525c80284" (UID: "7915ec77-ca16-4f23-a367-42b525c80284"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.375267 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb45b\" (UniqueName: \"kubernetes.io/projected/7915ec77-ca16-4f23-a367-42b525c80284-kube-api-access-tb45b\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.375316 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.375332 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7915ec77-ca16-4f23-a367-42b525c80284-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.780298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" event={"ID":"7915ec77-ca16-4f23-a367-42b525c80284","Type":"ContainerDied","Data":"5056c89f893c22f6d895f2db21ec550d28feaa74d141a03d37334d3db4ad6603"} Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.780363 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5056c89f893c22f6d895f2db21ec550d28feaa74d141a03d37334d3db4ad6603" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.780470 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5q58" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.875925 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"] Jan 30 14:21:55 crc kubenswrapper[4793]: E0130 14:21:55.876298 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7915ec77-ca16-4f23-a367-42b525c80284" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.876314 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7915ec77-ca16-4f23-a367-42b525c80284" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.876490 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7915ec77-ca16-4f23-a367-42b525c80284" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.877115 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.881950 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.895535 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.895734 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.895884 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.928582 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"] Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.998064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.998164 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:55 crc kubenswrapper[4793]: I0130 14:21:55.998425 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.100267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.100365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.100456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.106904 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.107112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.147399 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.258363 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:21:56 crc kubenswrapper[4793]: I0130 14:21:56.787646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7"] Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.398323 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:21:57 crc kubenswrapper[4793]: E0130 14:21:57.398836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.796424 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerStarted","Data":"fea5e63393f75f4b613c43ceaa8d48b3e7349e45486c106589b512deedfb7172"} Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.796480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerStarted","Data":"bd05c803b7c5cfaa753e46947ba4a87a5c66eb51717cb996ce4f28515a85e28e"} Jan 30 14:21:57 crc kubenswrapper[4793]: I0130 14:21:57.830974 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" podStartSLOduration=2.396983111 podStartE2EDuration="2.830953881s" podCreationTimestamp="2026-01-30 14:21:55 +0000 UTC" 
firstStartedPulling="2026-01-30 14:21:56.78893083 +0000 UTC m=+2327.490279321" lastFinishedPulling="2026-01-30 14:21:57.22290158 +0000 UTC m=+2327.924250091" observedRunningTime="2026-01-30 14:21:57.828417349 +0000 UTC m=+2328.529765880" watchObservedRunningTime="2026-01-30 14:21:57.830953881 +0000 UTC m=+2328.532302372" Jan 30 14:22:07 crc kubenswrapper[4793]: I0130 14:22:07.884810 4793 generic.go:334] "Generic (PLEG): container finished" podID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerID="fea5e63393f75f4b613c43ceaa8d48b3e7349e45486c106589b512deedfb7172" exitCode=0 Jan 30 14:22:07 crc kubenswrapper[4793]: I0130 14:22:07.884911 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerDied","Data":"fea5e63393f75f4b613c43ceaa8d48b3e7349e45486c106589b512deedfb7172"} Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.297409 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.385680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") pod \"0538b501-a861-4302-b26e-f5cfb17ed62a\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.386107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") pod \"0538b501-a861-4302-b26e-f5cfb17ed62a\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.386410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") pod \"0538b501-a861-4302-b26e-f5cfb17ed62a\" (UID: \"0538b501-a861-4302-b26e-f5cfb17ed62a\") " Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.391853 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn" (OuterVolumeSpecName: "kube-api-access-gp2cn") pod "0538b501-a861-4302-b26e-f5cfb17ed62a" (UID: "0538b501-a861-4302-b26e-f5cfb17ed62a"). InnerVolumeSpecName "kube-api-access-gp2cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.398892 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:09 crc kubenswrapper[4793]: E0130 14:22:09.399301 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.412955 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0538b501-a861-4302-b26e-f5cfb17ed62a" (UID: "0538b501-a861-4302-b26e-f5cfb17ed62a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.413492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory" (OuterVolumeSpecName: "inventory") pod "0538b501-a861-4302-b26e-f5cfb17ed62a" (UID: "0538b501-a861-4302-b26e-f5cfb17ed62a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.490753 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp2cn\" (UniqueName: \"kubernetes.io/projected/0538b501-a861-4302-b26e-f5cfb17ed62a-kube-api-access-gp2cn\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.490781 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.490791 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0538b501-a861-4302-b26e-f5cfb17ed62a-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.902262 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" event={"ID":"0538b501-a861-4302-b26e-f5cfb17ed62a","Type":"ContainerDied","Data":"bd05c803b7c5cfaa753e46947ba4a87a5c66eb51717cb996ce4f28515a85e28e"} Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.902579 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd05c803b7c5cfaa753e46947ba4a87a5c66eb51717cb996ce4f28515a85e28e" Jan 30 14:22:09 crc kubenswrapper[4793]: I0130 14:22:09.902320 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.011175 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"] Jan 30 14:22:10 crc kubenswrapper[4793]: E0130 14:22:10.011886 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.011995 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.012457 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0538b501-a861-4302-b26e-f5cfb17ed62a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.013348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019068 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019341 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019420 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019528 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.019889 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.020179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.023772 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"] Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165737 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.165981 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166107 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166262 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166356 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166474 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166524 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.166568 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.267695 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.267946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268065 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268219 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268309 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268380 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268676 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268747 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.268918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.272806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.273151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.274417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.275831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.277087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.277857 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.279175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.279659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.281899 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.282088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.282881 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.283113 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.285658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.289439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.482517 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:22:10 crc kubenswrapper[4793]: I0130 14:22:10.491094 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.035469 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp"] Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.537234 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.922426 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerStarted","Data":"89c8d9f7344ea357868d402178be5ed38d7a7f8c40ac7b30aa3adfa7292331e3"} Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.922802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerStarted","Data":"0da6000fbf46068f349a91eb8f524a9b8122da198bebf7d03c6e4893fda58193"} Jan 30 14:22:11 crc kubenswrapper[4793]: I0130 14:22:11.958953 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" podStartSLOduration=2.467887825 podStartE2EDuration="2.958933876s" podCreationTimestamp="2026-01-30 14:22:09 +0000 UTC" firstStartedPulling="2026-01-30 14:22:11.043696303 +0000 UTC m=+2341.745044794" lastFinishedPulling="2026-01-30 14:22:11.534742354 +0000 UTC m=+2342.236090845" observedRunningTime="2026-01-30 14:22:11.94594558 +0000 UTC m=+2342.647294121" watchObservedRunningTime="2026-01-30 14:22:11.958933876 +0000 UTC m=+2342.660282387" Jan 30 14:22:24 crc kubenswrapper[4793]: I0130 14:22:24.399261 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:24 crc kubenswrapper[4793]: E0130 14:22:24.402062 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:36 crc kubenswrapper[4793]: I0130 14:22:36.398382 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:36 crc kubenswrapper[4793]: E0130 14:22:36.399362 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:50 crc kubenswrapper[4793]: I0130 14:22:50.398631 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:22:50 crc kubenswrapper[4793]: E0130 14:22:50.399765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:22:51 crc kubenswrapper[4793]: I0130 14:22:51.249007 4793 generic.go:334] "Generic (PLEG): container finished" podID="ae4f8964-b104-43bb-8356-bb53a9635527" containerID="89c8d9f7344ea357868d402178be5ed38d7a7f8c40ac7b30aa3adfa7292331e3" exitCode=0 Jan 30 14:22:51 crc kubenswrapper[4793]: I0130 14:22:51.249200 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerDied","Data":"89c8d9f7344ea357868d402178be5ed38d7a7f8c40ac7b30aa3adfa7292331e3"} Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.654842 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693790 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693812 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.693978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694022 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694066 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694104 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694195 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694243 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.694277 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ae4f8964-b104-43bb-8356-bb53a9635527\" (UID: \"ae4f8964-b104-43bb-8356-bb53a9635527\") " Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.705007 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t" (OuterVolumeSpecName: "kube-api-access-d2t4t") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "kube-api-access-d2t4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.705930 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.711397 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.712853 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.713909 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.714847 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.717377 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.717822 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.718349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.718474 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.722943 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.733307 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.739455 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.741598 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory" (OuterVolumeSpecName: "inventory") pod "ae4f8964-b104-43bb-8356-bb53a9635527" (UID: "ae4f8964-b104-43bb-8356-bb53a9635527"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.796685 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.796894 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797010 4793 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797130 4793 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797224 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797313 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797395 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2t4t\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-kube-api-access-d2t4t\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797473 4793 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797558 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797641 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797730 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797807 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797879 4793 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f8964-b104-43bb-8356-bb53a9635527-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:52 crc kubenswrapper[4793]: I0130 14:22:52.797958 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae4f8964-b104-43bb-8356-bb53a9635527-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.272787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" event={"ID":"ae4f8964-b104-43bb-8356-bb53a9635527","Type":"ContainerDied","Data":"0da6000fbf46068f349a91eb8f524a9b8122da198bebf7d03c6e4893fda58193"} Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.273151 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0da6000fbf46068f349a91eb8f524a9b8122da198bebf7d03c6e4893fda58193" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.272887 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.394847 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7"] Jan 30 14:22:53 crc kubenswrapper[4793]: E0130 14:22:53.397195 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4f8964-b104-43bb-8356-bb53a9635527" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.397242 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4f8964-b104-43bb-8356-bb53a9635527" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.397813 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4f8964-b104-43bb-8356-bb53a9635527" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.398700 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.401732 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.401760 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.402105 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.403527 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7"] Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.405632 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.405814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513254 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.513382 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615670 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615739 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.615835 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.616589 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.620318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.621692 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.622125 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.666740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45sz7\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:53 crc kubenswrapper[4793]: I0130 14:22:53.714315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:22:54 crc kubenswrapper[4793]: I0130 14:22:54.295310 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7"] Jan 30 14:22:55 crc kubenswrapper[4793]: I0130 14:22:55.307314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerStarted","Data":"219da4f20d3a98a397a408028d5a88362d19486413272faf80a42261aca02884"} Jan 30 14:22:55 crc kubenswrapper[4793]: I0130 14:22:55.307965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerStarted","Data":"062659d165e41463074a05fd5501629453876dd6ce5b9a5b154ed6ee90613d8f"} Jan 30 14:23:03 crc kubenswrapper[4793]: I0130 14:23:03.398483 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:03 crc kubenswrapper[4793]: E0130 14:23:03.399432 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:14 crc kubenswrapper[4793]: I0130 14:23:14.398396 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:14 crc kubenswrapper[4793]: E0130 14:23:14.399494 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:25 crc kubenswrapper[4793]: I0130 14:23:25.398649 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:25 crc kubenswrapper[4793]: E0130 14:23:25.399553 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:37 crc kubenswrapper[4793]: I0130 14:23:37.399840 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:37 crc kubenswrapper[4793]: E0130 14:23:37.401397 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:49 crc kubenswrapper[4793]: I0130 14:23:49.398948 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:23:49 crc kubenswrapper[4793]: E0130 14:23:49.400160 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:23:50 crc kubenswrapper[4793]: I0130 14:23:50.948711 4793 scope.go:117] "RemoveContainer" containerID="87ada9a6b5346c7032748aa17aea82f42d27a30601825dfb46499a4bfb7bf949" Jan 30 14:23:50 crc kubenswrapper[4793]: I0130 14:23:50.979640 4793 scope.go:117] "RemoveContainer" containerID="97e00f686b282180edd4c6895080d4ff4fea6b3dd37684dbd36be6025541ffd0" Jan 30 14:23:51 crc kubenswrapper[4793]: I0130 14:23:51.063886 4793 scope.go:117] "RemoveContainer" containerID="085807c590a6db119c8b09a9c636c0a0db1e0e333c8a025332a79e249f76032c" Jan 30 14:24:00 crc kubenswrapper[4793]: I0130 14:24:00.410970 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:24:00 crc kubenswrapper[4793]: E0130 14:24:00.411457 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:24:02 crc kubenswrapper[4793]: I0130 14:24:02.898736 4793 generic.go:334] "Generic (PLEG): container finished" podID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerID="219da4f20d3a98a397a408028d5a88362d19486413272faf80a42261aca02884" exitCode=0 Jan 30 14:24:02 crc kubenswrapper[4793]: I0130 14:24:02.898830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerDied","Data":"219da4f20d3a98a397a408028d5a88362d19486413272faf80a42261aca02884"} Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.414489 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590434 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590559 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590665 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590806 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.590887 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") pod \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\" (UID: \"dbd66148-cdd0-4e92-9601-3ef1576a5d3f\") " Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.597288 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv" (OuterVolumeSpecName: "kube-api-access-7rrtv") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "kube-api-access-7rrtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.599214 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.613505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.616266 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.620428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory" (OuterVolumeSpecName: "inventory") pod "dbd66148-cdd0-4e92-9601-3ef1576a5d3f" (UID: "dbd66148-cdd0-4e92-9601-3ef1576a5d3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693432 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693481 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693492 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrtv\" (UniqueName: \"kubernetes.io/projected/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-kube-api-access-7rrtv\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693503 4793 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.693514 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbd66148-cdd0-4e92-9601-3ef1576a5d3f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.921646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" event={"ID":"dbd66148-cdd0-4e92-9601-3ef1576a5d3f","Type":"ContainerDied","Data":"062659d165e41463074a05fd5501629453876dd6ce5b9a5b154ed6ee90613d8f"} Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.921872 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062659d165e41463074a05fd5501629453876dd6ce5b9a5b154ed6ee90613d8f" Jan 30 14:24:04 crc kubenswrapper[4793]: I0130 14:24:04.922384 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45sz7" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.094377 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"] Jan 30 14:24:05 crc kubenswrapper[4793]: E0130 14:24:05.094805 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.094823 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.095002 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd66148-cdd0-4e92-9601-3ef1576a5d3f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.095749 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.098545 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099538 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099784 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.099962 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.103965 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.108363 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"] Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.202970 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203267 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203305 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.203625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.306740 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.306900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.306970 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.307036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.307250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.307507 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.311437 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.311653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.312100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.313109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.314288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:05 
crc kubenswrapper[4793]: I0130 14:24:05.328196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"
Jan 30 14:24:05 crc kubenswrapper[4793]: I0130 14:24:05.438180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"
Jan 30 14:24:06 crc kubenswrapper[4793]: I0130 14:24:06.007821 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk"]
Jan 30 14:24:06 crc kubenswrapper[4793]: I0130 14:24:06.954397 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerStarted","Data":"e55213f2fced3737de3fb3ff4602498a86b686ff3ab59fdf6509dddac24327d6"}
Jan 30 14:24:07 crc kubenswrapper[4793]: I0130 14:24:07.966467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerStarted","Data":"5885befe35927759b0d2ced1a2a1467580181cfae34c28239ea999f58e29a334"}
Jan 30 14:24:08 crc kubenswrapper[4793]: I0130 14:24:08.001283 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" podStartSLOduration=2.210273821 podStartE2EDuration="3.001251148s" podCreationTimestamp="2026-01-30 14:24:05 +0000 UTC" firstStartedPulling="2026-01-30 14:24:06.00563902 +0000 UTC m=+2456.706987521" lastFinishedPulling="2026-01-30 14:24:06.796616337 +0000 UTC m=+2457.497964848" observedRunningTime="2026-01-30 14:24:07.987743506 +0000 UTC m=+2458.689092007" watchObservedRunningTime="2026-01-30 14:24:08.001251148 +0000 UTC m=+2458.702599659"
Jan 30 14:24:12 crc kubenswrapper[4793]: I0130 14:24:12.398472 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
Jan 30 14:24:12 crc kubenswrapper[4793]: E0130 14:24:12.400459 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:24:27 crc kubenswrapper[4793]: I0130 14:24:27.398999 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
Jan 30 14:24:27 crc kubenswrapper[4793]: E0130 14:24:27.399726 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:24:39 crc kubenswrapper[4793]: I0130 14:24:39.398579 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
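The "Observed pod startup duration" entry above is internally consistent, and the relationship between its fields is worth making explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python check using the monotonic m=+ offsets transcribed from the entry above; every number is from the log, and only the arithmetic is an assumption, which the values bear out exactly:

    # Durations (seconds) transcribed from the pod_startup_latency_tracker
    # entry for neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk.
    first_started_pulling = 2456.706987521
    last_finished_pulling = 2457.497964848
    pod_start_e2e = 3.001251148  # watchObservedRunningTime - podCreationTimestamp

    image_pull = last_finished_pulling - first_started_pulling
    print(round(image_pull, 9))                  # 0.790977327
    print(round(pod_start_e2e - image_pull, 9))  # 2.210273821, i.e. podStartSLOduration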
Jan 30 14:24:39 crc kubenswrapper[4793]: E0130 14:24:39.399502 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:24:52 crc kubenswrapper[4793]: I0130 14:24:52.398865 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19"
Jan 30 14:24:52 crc kubenswrapper[4793]: E0130 14:24:52.399663 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
Jan 30 14:24:57 crc kubenswrapper[4793]: I0130 14:24:57.409075 4793 generic.go:334] "Generic (PLEG): container finished" podID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerID="5885befe35927759b0d2ced1a2a1467580181cfae34c28239ea999f58e29a334" exitCode=0
Jan 30 14:24:57 crc kubenswrapper[4793]: I0130 14:24:57.409167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerDied","Data":"5885befe35927759b0d2ced1a2a1467580181cfae34c28239ea999f58e29a334"}
Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.819410 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.918717 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919847 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919923 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919963 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.919986 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\" (UID: \"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5\") " Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.924975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw" (OuterVolumeSpecName: "kube-api-access-hkgcw") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "kube-api-access-hkgcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.930315 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.950834 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.953233 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory" (OuterVolumeSpecName: "inventory") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.954551 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:58 crc kubenswrapper[4793]: I0130 14:24:58.958902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" (UID: "92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022809 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022847 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022866 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkgcw\" (UniqueName: \"kubernetes.io/projected/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-kube-api-access-hkgcw\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022895 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.022909 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.428462 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" event={"ID":"92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5","Type":"ContainerDied","Data":"e55213f2fced3737de3fb3ff4602498a86b686ff3ab59fdf6509dddac24327d6"} Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.428559 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e55213f2fced3737de3fb3ff4602498a86b686ff3ab59fdf6509dddac24327d6" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.428581 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.553625 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"] Jan 30 14:24:59 crc kubenswrapper[4793]: E0130 14:24:59.554162 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.554188 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.554419 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.555242 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.558759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.559169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.559454 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.559578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.564770 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.571489 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"] Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634830 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634921 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.634993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.737432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.742498 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: 
\"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.743103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.745818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.752618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.756529 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:24:59 crc kubenswrapper[4793]: I0130 14:24:59.890168 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:25:00 crc kubenswrapper[4793]: I0130 14:25:00.510962 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2"] Jan 30 14:25:01 crc kubenswrapper[4793]: I0130 14:25:01.458515 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerStarted","Data":"61f0898c6128b3026d78cf3afa09780d7e497bed3bbd093ccb7f3ad49150e91f"} Jan 30 14:25:01 crc kubenswrapper[4793]: I0130 14:25:01.458563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerStarted","Data":"0bf138472118ab1f44e112f736372179f055ce03bbf973e33b87d18006a030f8"} Jan 30 14:25:01 crc kubenswrapper[4793]: I0130 14:25:01.474308 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" podStartSLOduration=1.974009822 podStartE2EDuration="2.474292974s" podCreationTimestamp="2026-01-30 14:24:59 +0000 UTC" firstStartedPulling="2026-01-30 14:25:00.522896189 +0000 UTC m=+2511.224244680" lastFinishedPulling="2026-01-30 14:25:01.023179311 +0000 UTC m=+2511.724527832" observedRunningTime="2026-01-30 14:25:01.473536466 +0000 UTC m=+2512.174884957" watchObservedRunningTime="2026-01-30 14:25:01.474292974 +0000 UTC m=+2512.175641465" Jan 30 14:25:06 crc kubenswrapper[4793]: I0130 14:25:06.401127 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:06 crc kubenswrapper[4793]: E0130 14:25:06.402670 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:25:19 crc kubenswrapper[4793]: I0130 14:25:19.399460 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:19 crc kubenswrapper[4793]: E0130 14:25:19.400338 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:25:33 crc kubenswrapper[4793]: I0130 14:25:33.398448 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:33 crc kubenswrapper[4793]: E0130 14:25:33.399361 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:25:48 crc kubenswrapper[4793]: I0130 14:25:48.399317 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:25:48 crc kubenswrapper[4793]: E0130 14:25:48.400207 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:01 crc kubenswrapper[4793]: I0130 14:26:01.398116 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:01 crc kubenswrapper[4793]: E0130 14:26:01.398822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:13 crc kubenswrapper[4793]: I0130 14:26:13.399641 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:13 crc kubenswrapper[4793]: E0130 14:26:13.401036 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:27 crc kubenswrapper[4793]: I0130 14:26:27.398100 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:27 crc kubenswrapper[4793]: E0130 14:26:27.399989 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:39 crc kubenswrapper[4793]: I0130 14:26:39.399154 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:39 crc kubenswrapper[4793]: E0130 14:26:39.399926 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:26:50 crc kubenswrapper[4793]: I0130 14:26:50.406830 4793 
scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:26:51 crc kubenswrapper[4793]: I0130 14:26:51.446922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57"} Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.066710 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.070525 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.084221 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.172686 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.172812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.172863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.274239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.274314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.274389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.275027 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.275031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.296185 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"community-operators-j9zsb\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:29 crc kubenswrapper[4793]: I0130 14:27:29.386769 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.068065 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.808019 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ac188e0-8883-4288-8574-a8388bea78d2" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59" exitCode=0 Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.808783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"} Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.808949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerStarted","Data":"2e6978349422c2c067899ffd8f2d73652f6c4e68208717f0207feab345d75662"} Jan 30 14:27:30 crc kubenswrapper[4793]: I0130 14:27:30.810649 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:27:31 crc kubenswrapper[4793]: I0130 14:27:31.819613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerStarted","Data":"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"} Jan 30 14:27:35 crc kubenswrapper[4793]: I0130 14:27:35.872002 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ac188e0-8883-4288-8574-a8388bea78d2" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa" exitCode=0 Jan 30 14:27:35 crc kubenswrapper[4793]: I0130 14:27:35.872793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"} Jan 30 14:27:36 crc kubenswrapper[4793]: I0130 14:27:36.884859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" 
event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerStarted","Data":"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"} Jan 30 14:27:36 crc kubenswrapper[4793]: I0130 14:27:36.919278 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j9zsb" podStartSLOduration=2.349947378 podStartE2EDuration="7.919237533s" podCreationTimestamp="2026-01-30 14:27:29 +0000 UTC" firstStartedPulling="2026-01-30 14:27:30.810271427 +0000 UTC m=+2661.511619928" lastFinishedPulling="2026-01-30 14:27:36.379561552 +0000 UTC m=+2667.080910083" observedRunningTime="2026-01-30 14:27:36.908813047 +0000 UTC m=+2667.610161558" watchObservedRunningTime="2026-01-30 14:27:36.919237533 +0000 UTC m=+2667.620586034" Jan 30 14:27:39 crc kubenswrapper[4793]: I0130 14:27:39.387397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:39 crc kubenswrapper[4793]: I0130 14:27:39.387841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:39 crc kubenswrapper[4793]: I0130 14:27:39.444558 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:49 crc kubenswrapper[4793]: I0130 14:27:49.436717 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:49 crc kubenswrapper[4793]: I0130 14:27:49.500164 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:49 crc kubenswrapper[4793]: I0130 14:27:49.998185 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j9zsb" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" containerID="cri-o://e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" gracePeriod=2 Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.445558 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.453511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") pod \"8ac188e0-8883-4288-8574-a8388bea78d2\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.453580 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") pod \"8ac188e0-8883-4288-8574-a8388bea78d2\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.453686 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") pod \"8ac188e0-8883-4288-8574-a8388bea78d2\" (UID: \"8ac188e0-8883-4288-8574-a8388bea78d2\") " Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.454416 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities" (OuterVolumeSpecName: "utilities") pod "8ac188e0-8883-4288-8574-a8388bea78d2" (UID: "8ac188e0-8883-4288-8574-a8388bea78d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.468305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw" (OuterVolumeSpecName: "kube-api-access-qfkbw") pod "8ac188e0-8883-4288-8574-a8388bea78d2" (UID: "8ac188e0-8883-4288-8574-a8388bea78d2"). InnerVolumeSpecName "kube-api-access-qfkbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.517963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ac188e0-8883-4288-8574-a8388bea78d2" (UID: "8ac188e0-8883-4288-8574-a8388bea78d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.556622 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.556655 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac188e0-8883-4288-8574-a8388bea78d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:27:50 crc kubenswrapper[4793]: I0130 14:27:50.556670 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfkbw\" (UniqueName: \"kubernetes.io/projected/8ac188e0-8883-4288-8574-a8388bea78d2-kube-api-access-qfkbw\") on node \"crc\" DevicePath \"\"" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013276 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ac188e0-8883-4288-8574-a8388bea78d2" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" exitCode=0 Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"} Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9zsb" event={"ID":"8ac188e0-8883-4288-8574-a8388bea78d2","Type":"ContainerDied","Data":"2e6978349422c2c067899ffd8f2d73652f6c4e68208717f0207feab345d75662"} Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013403 4793 scope.go:117] "RemoveContainer" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.013425 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j9zsb" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.039844 4793 scope.go:117] "RemoveContainer" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.055560 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.062510 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j9zsb"] Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.064730 4793 scope.go:117] "RemoveContainer" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.113935 4793 scope.go:117] "RemoveContainer" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" Jan 30 14:27:51 crc kubenswrapper[4793]: E0130 14:27:51.114387 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7\": container with ID starting with e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7 not found: ID does not exist" containerID="e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114429 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7"} err="failed to get container status \"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7\": rpc error: code = NotFound desc = could not find container \"e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7\": container with ID starting with e006994e9b0e2493ff422166b1a4f2dea802573349b5087a67eec8531d9251e7 not found: ID does not exist" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114457 4793 scope.go:117] "RemoveContainer" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa" Jan 30 14:27:51 crc kubenswrapper[4793]: E0130 14:27:51.114736 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa\": container with ID starting with 5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa not found: ID does not exist" containerID="5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114771 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa"} err="failed to get container status \"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa\": rpc error: code = NotFound desc = could not find container \"5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa\": container with ID starting with 5360cf8d850293e3f3a45d17c359bcea224e59820817cf7ea7c40c8d97096caa not found: ID does not exist" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.114791 4793 scope.go:117] "RemoveContainer" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59" Jan 30 14:27:51 crc kubenswrapper[4793]: E0130 14:27:51.115223 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": container with ID starting with 3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59 not found: ID does not exist" containerID="3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59" Jan 30 14:27:51 crc kubenswrapper[4793]: I0130 14:27:51.115248 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59"} err="failed to get container status \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": rpc error: code = NotFound desc = could not find container \"3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59\": container with ID starting with 3157cd0044f22962277ca0a8aaa2db39127f475302b8fd836dbebc29ccbfcb59 not found: ID does not exist" Jan 30 14:27:52 crc kubenswrapper[4793]: I0130 14:27:52.408887 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" path="/var/lib/kubelet/pods/8ac188e0-8883-4288-8574-a8388bea78d2/volumes" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725062 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:27:53 crc kubenswrapper[4793]: E0130 14:27:53.725693 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-utilities" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725705 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-utilities" Jan 30 14:27:53 crc kubenswrapper[4793]: E0130 14:27:53.725727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-content" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725733 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="extract-content" Jan 30 14:27:53 crc kubenswrapper[4793]: E0130 14:27:53.725752 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725759 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.725990 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac188e0-8883-4288-8574-a8388bea78d2" containerName="registry-server" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.727385 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.751021 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.923715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.924099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:53 crc kubenswrapper[4793]: I0130 14:27:53.924193 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026370 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026504 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.026969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.027129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.052015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"redhat-operators-tcg6z\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.346414 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:27:54 crc kubenswrapper[4793]: I0130 14:27:54.719345 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:27:55 crc kubenswrapper[4793]: I0130 14:27:55.052273 4793 generic.go:334] "Generic (PLEG): container finished" podID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" exitCode=0 Jan 30 14:27:55 crc kubenswrapper[4793]: I0130 14:27:55.052400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0"} Jan 30 14:27:55 crc kubenswrapper[4793]: I0130 14:27:55.052587 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerStarted","Data":"c693d9182095ee36e51a7a2bd725bebc76ec6dfb2df0b81b55aa8de3f6cfa553"} Jan 30 14:27:56 crc kubenswrapper[4793]: I0130 14:27:56.062093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerStarted","Data":"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169"} Jan 30 14:28:08 crc kubenswrapper[4793]: I0130 14:28:08.181723 4793 generic.go:334] "Generic (PLEG): container finished" podID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" exitCode=0 Jan 30 14:28:08 crc kubenswrapper[4793]: I0130 14:28:08.181764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169"} Jan 30 14:28:12 crc kubenswrapper[4793]: I0130 14:28:12.222385 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerStarted","Data":"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b"} Jan 30 14:28:12 crc kubenswrapper[4793]: I0130 14:28:12.248128 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tcg6z" podStartSLOduration=2.794861712 podStartE2EDuration="19.248109189s" podCreationTimestamp="2026-01-30 14:27:53 +0000 UTC" firstStartedPulling="2026-01-30 14:27:55.053724415 +0000 UTC m=+2685.755072906" lastFinishedPulling="2026-01-30 14:28:11.506971882 +0000 UTC m=+2702.208320383" observedRunningTime="2026-01-30 14:28:12.246784226 +0000 UTC m=+2702.948132727" watchObservedRunningTime="2026-01-30 14:28:12.248109189 +0000 UTC m=+2702.949457680" Jan 30 14:28:14 crc kubenswrapper[4793]: I0130 14:28:14.346801 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 
14:28:14 crc kubenswrapper[4793]: I0130 14:28:14.347318 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:28:15 crc kubenswrapper[4793]: I0130 14:28:15.388699 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tcg6z" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" probeResult="failure" output=< Jan 30 14:28:15 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:28:15 crc kubenswrapper[4793]: > Jan 30 14:28:24 crc kubenswrapper[4793]: I0130 14:28:24.412281 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:28:24 crc kubenswrapper[4793]: I0130 14:28:24.480391 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:28:24 crc kubenswrapper[4793]: I0130 14:28:24.928947 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.336367 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tcg6z" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" containerID="cri-o://63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" gracePeriod=2 Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.797916 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.938523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") pod \"2248feb5-b64e-4fbc-8993-7d6e69082932\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.938979 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") pod \"2248feb5-b64e-4fbc-8993-7d6e69082932\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.939094 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") pod \"2248feb5-b64e-4fbc-8993-7d6e69082932\" (UID: \"2248feb5-b64e-4fbc-8993-7d6e69082932\") " Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.941784 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities" (OuterVolumeSpecName: "utilities") pod "2248feb5-b64e-4fbc-8993-7d6e69082932" (UID: "2248feb5-b64e-4fbc-8993-7d6e69082932"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:26 crc kubenswrapper[4793]: I0130 14:28:26.951364 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7" (OuterVolumeSpecName: "kube-api-access-drbh7") pod "2248feb5-b64e-4fbc-8993-7d6e69082932" (UID: "2248feb5-b64e-4fbc-8993-7d6e69082932"). InnerVolumeSpecName "kube-api-access-drbh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.041325 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.041371 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drbh7\" (UniqueName: \"kubernetes.io/projected/2248feb5-b64e-4fbc-8993-7d6e69082932-kube-api-access-drbh7\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.064537 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2248feb5-b64e-4fbc-8993-7d6e69082932" (UID: "2248feb5-b64e-4fbc-8993-7d6e69082932"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.143148 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2248feb5-b64e-4fbc-8993-7d6e69082932-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354632 4793 generic.go:334] "Generic (PLEG): container finished" podID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" exitCode=0 Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354698 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b"} Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354723 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tcg6z" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354755 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcg6z" event={"ID":"2248feb5-b64e-4fbc-8993-7d6e69082932","Type":"ContainerDied","Data":"c693d9182095ee36e51a7a2bd725bebc76ec6dfb2df0b81b55aa8de3f6cfa553"} Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.354779 4793 scope.go:117] "RemoveContainer" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.385156 4793 scope.go:117] "RemoveContainer" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.413005 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.430458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tcg6z"] Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.437989 4793 scope.go:117] "RemoveContainer" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.486794 4793 scope.go:117] "RemoveContainer" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" Jan 30 14:28:27 crc kubenswrapper[4793]: E0130 14:28:27.487422 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b\": container with ID starting with 63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b not found: ID does not exist" containerID="63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.487472 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b"} err="failed to get container status \"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b\": rpc error: code = NotFound desc = could not find container \"63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b\": container with ID starting with 63e9fab09e7bc5f52fa6c3fb43efa045499b10c6eff030348b17653cf0ae7e6b not found: ID does not exist" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.487508 4793 scope.go:117] "RemoveContainer" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" Jan 30 14:28:27 crc kubenswrapper[4793]: E0130 14:28:27.489358 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169\": container with ID starting with dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169 not found: ID does not exist" containerID="dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.489389 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169"} err="failed to get container status \"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169\": rpc error: code = NotFound desc = could not find container 
\"dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169\": container with ID starting with dfd04b79280b847b6021bea0038b95210a6f413a97aa36e02f833e230a777169 not found: ID does not exist" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.489409 4793 scope.go:117] "RemoveContainer" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" Jan 30 14:28:27 crc kubenswrapper[4793]: E0130 14:28:27.489756 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0\": container with ID starting with 2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0 not found: ID does not exist" containerID="2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0" Jan 30 14:28:27 crc kubenswrapper[4793]: I0130 14:28:27.489787 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0"} err="failed to get container status \"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0\": rpc error: code = NotFound desc = could not find container \"2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0\": container with ID starting with 2504c1d3a6969af975a10df1ae43d90666d87e7f13b983b1f30ca968545885f0 not found: ID does not exist" Jan 30 14:28:28 crc kubenswrapper[4793]: I0130 14:28:28.410242 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" path="/var/lib/kubelet/pods/2248feb5-b64e-4fbc-8993-7d6e69082932/volumes" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.796469 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:40 crc kubenswrapper[4793]: E0130 14:28:40.797542 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797568 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" Jan 30 14:28:40 crc kubenswrapper[4793]: E0130 14:28:40.797598 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-utilities" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797608 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-utilities" Jan 30 14:28:40 crc kubenswrapper[4793]: E0130 14:28:40.797638 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-content" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797647 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="extract-content" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.797930 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2248feb5-b64e-4fbc-8993-7d6e69082932" containerName="registry-server" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.799845 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.819177 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.919232 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.919546 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:40 crc kubenswrapper[4793]: I0130 14:28:40.919608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.021619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.021765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.021789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.022311 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.022387 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.041728 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"redhat-marketplace-q7wt9\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.124195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:41 crc kubenswrapper[4793]: I0130 14:28:41.641338 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:42 crc kubenswrapper[4793]: I0130 14:28:42.486482 4793 generic.go:334] "Generic (PLEG): container finished" podID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" exitCode=0 Jan 30 14:28:42 crc kubenswrapper[4793]: I0130 14:28:42.486839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb"} Jan 30 14:28:42 crc kubenswrapper[4793]: I0130 14:28:42.486872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerStarted","Data":"bd05cc44721911bb54d243d9cfe6e7c414c9830e172625313e31e6fa71a99d40"} Jan 30 14:28:43 crc kubenswrapper[4793]: I0130 14:28:43.503945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerStarted","Data":"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8"} Jan 30 14:28:44 crc kubenswrapper[4793]: I0130 14:28:44.514256 4793 generic.go:334] "Generic (PLEG): container finished" podID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" exitCode=0 Jan 30 14:28:44 crc kubenswrapper[4793]: I0130 14:28:44.514607 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8"} Jan 30 14:28:45 crc kubenswrapper[4793]: I0130 14:28:45.524241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerStarted","Data":"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905"} Jan 30 14:28:45 crc kubenswrapper[4793]: I0130 14:28:45.571976 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7wt9" podStartSLOduration=3.184672034 podStartE2EDuration="5.571953439s" podCreationTimestamp="2026-01-30 14:28:40 +0000 UTC" firstStartedPulling="2026-01-30 14:28:42.488687729 +0000 UTC m=+2733.190036220" lastFinishedPulling="2026-01-30 14:28:44.875969124 +0000 UTC m=+2735.577317625" observedRunningTime="2026-01-30 14:28:45.54800113 +0000 UTC m=+2736.249349631" watchObservedRunningTime="2026-01-30 14:28:45.571953439 +0000 UTC m=+2736.273301930" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.124490 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.125038 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.174245 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.626746 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:51 crc kubenswrapper[4793]: I0130 14:28:51.684439 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:53 crc kubenswrapper[4793]: I0130 14:28:53.597671 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7wt9" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" containerID="cri-o://e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" gracePeriod=2 Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.551570 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.583418 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") pod \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.583491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") pod \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.583538 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") pod \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\" (UID: \"c78dc643-5d9a-4998-a1a2-2a1992eaad88\") " Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.584797 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities" (OuterVolumeSpecName: "utilities") pod "c78dc643-5d9a-4998-a1a2-2a1992eaad88" (UID: "c78dc643-5d9a-4998-a1a2-2a1992eaad88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.608291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c78dc643-5d9a-4998-a1a2-2a1992eaad88" (UID: "c78dc643-5d9a-4998-a1a2-2a1992eaad88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.616651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26" (OuterVolumeSpecName: "kube-api-access-8qm26") pod "c78dc643-5d9a-4998-a1a2-2a1992eaad88" (UID: "c78dc643-5d9a-4998-a1a2-2a1992eaad88"). InnerVolumeSpecName "kube-api-access-8qm26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663835 4793 generic.go:334] "Generic (PLEG): container finished" podID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" exitCode=0 Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663878 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905"} Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663905 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wt9" event={"ID":"c78dc643-5d9a-4998-a1a2-2a1992eaad88","Type":"ContainerDied","Data":"bd05cc44721911bb54d243d9cfe6e7c414c9830e172625313e31e6fa71a99d40"} Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.663933 4793 scope.go:117] "RemoveContainer" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.664157 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wt9" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.686414 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.686664 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78dc643-5d9a-4998-a1a2-2a1992eaad88-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.686746 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qm26\" (UniqueName: \"kubernetes.io/projected/c78dc643-5d9a-4998-a1a2-2a1992eaad88-kube-api-access-8qm26\") on node \"crc\" DevicePath \"\"" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.714016 4793 scope.go:117] "RemoveContainer" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.715664 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.727342 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wt9"] Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.738140 4793 scope.go:117] "RemoveContainer" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.789031 4793 scope.go:117] "RemoveContainer" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" Jan 30 14:28:54 crc kubenswrapper[4793]: E0130 14:28:54.792449 4793 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905\": container with ID starting with e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905 not found: ID does not exist" containerID="e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.792666 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905"} err="failed to get container status \"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905\": rpc error: code = NotFound desc = could not find container \"e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905\": container with ID starting with e7a4f4a9e5d19ae7c3fab04665adf3ea186da3d68a6e375bd2b7781e1a8c0905 not found: ID does not exist" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.792783 4793 scope.go:117] "RemoveContainer" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" Jan 30 14:28:54 crc kubenswrapper[4793]: E0130 14:28:54.793487 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8\": container with ID starting with d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8 not found: ID does not exist" containerID="d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.793522 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8"} err="failed to get container status \"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8\": rpc error: code = NotFound desc = could not find container \"d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8\": container with ID starting with d32941e9fc31cf0db4051fa820064b395b72ea0fe192419f847de07faba184c8 not found: ID does not exist" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.793539 4793 scope.go:117] "RemoveContainer" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" Jan 30 14:28:54 crc kubenswrapper[4793]: E0130 14:28:54.795177 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb\": container with ID starting with d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb not found: ID does not exist" containerID="d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb" Jan 30 14:28:54 crc kubenswrapper[4793]: I0130 14:28:54.795277 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb"} err="failed to get container status \"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb\": rpc error: code = NotFound desc = could not find container \"d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb\": container with ID starting with d3d8ed29dbd7a6052eefb71c2e6c4b6248af805d04ba5767f763671b3b5633fb not found: ID does not exist" Jan 30 14:28:56 crc kubenswrapper[4793]: I0130 14:28:56.407829 4793 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" path="/var/lib/kubelet/pods/c78dc643-5d9a-4998-a1a2-2a1992eaad88/volumes" Jan 30 14:29:12 crc kubenswrapper[4793]: I0130 14:29:12.413721 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:29:12 crc kubenswrapper[4793]: I0130 14:29:12.414004 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:29:39 crc kubenswrapper[4793]: I0130 14:29:39.439854 4793 generic.go:334] "Generic (PLEG): container finished" podID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerID="61f0898c6128b3026d78cf3afa09780d7e497bed3bbd093ccb7f3ad49150e91f" exitCode=0 Jan 30 14:29:39 crc kubenswrapper[4793]: I0130 14:29:39.440494 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerDied","Data":"61f0898c6128b3026d78cf3afa09780d7e497bed3bbd093ccb7f3ad49150e91f"} Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.848903 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.979833 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980111 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980240 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980291 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.980392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") pod \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\" (UID: \"96926233-9ce4-4a0b-bab4-d0c4fa90389b\") " Jan 30 
14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.986568 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk" (OuterVolumeSpecName: "kube-api-access-k5pnk") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "kube-api-access-k5pnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:29:40 crc kubenswrapper[4793]: I0130 14:29:40.992463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.014704 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.016222 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.026298 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory" (OuterVolumeSpecName: "inventory") pod "96926233-9ce4-4a0b-bab4-d0c4fa90389b" (UID: "96926233-9ce4-4a0b-bab4-d0c4fa90389b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.082946 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5pnk\" (UniqueName: \"kubernetes.io/projected/96926233-9ce4-4a0b-bab4-d0c4fa90389b-kube-api-access-k5pnk\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.082995 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.083013 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.083026 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.083060 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/96926233-9ce4-4a0b-bab4-d0c4fa90389b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.464683 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.464547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2" event={"ID":"96926233-9ce4-4a0b-bab4-d0c4fa90389b","Type":"ContainerDied","Data":"0bf138472118ab1f44e112f736372179f055ce03bbf973e33b87d18006a030f8"} Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.465544 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf138472118ab1f44e112f736372179f055ce03bbf973e33b87d18006a030f8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.558588 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8"] Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.558986 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559006 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.559040 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-content" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559064 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-content" Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.559078 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-utilities" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559086 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="extract-utilities" Jan 30 14:29:41 crc kubenswrapper[4793]: E0130 14:29:41.559110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559117 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559388 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="96926233-9ce4-4a0b-bab4-d0c4fa90389b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.559410 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78dc643-5d9a-4998-a1a2-2a1992eaad88" containerName="registry-server" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.560754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.568566 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.568606 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.568566 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569080 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569198 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569241 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.569495 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.575004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8"] Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.711874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.711947 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712346 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.712605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.814816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 
14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.814982 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.815025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816125 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816596 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.816760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.817015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 
14:29:41.818591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.821592 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.822487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.822759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.824192 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.824923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.825417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.834212 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.842246 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sk8t8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:41 crc kubenswrapper[4793]: I0130 14:29:41.890453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.414098 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.414439 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.460622 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8"] Jan 30 14:29:42 crc kubenswrapper[4793]: I0130 14:29:42.475330 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerStarted","Data":"35c08494f8afe2508d0796d2d7916a60b01429d9956705b3e7cc36e86561fae0"} Jan 30 14:29:43 crc kubenswrapper[4793]: I0130 14:29:43.486646 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerStarted","Data":"5e41fdf863829756b00ca7e86cc571728bb392f0583e10c4de618e692db88093"} Jan 30 14:29:43 crc kubenswrapper[4793]: I0130 14:29:43.520512 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" podStartSLOduration=2.082065362 podStartE2EDuration="2.520493733s" podCreationTimestamp="2026-01-30 14:29:41 +0000 UTC" firstStartedPulling="2026-01-30 14:29:42.469085759 +0000 UTC m=+2793.170434250" lastFinishedPulling="2026-01-30 14:29:42.90751413 +0000 UTC m=+2793.608862621" observedRunningTime="2026-01-30 14:29:43.510553969 +0000 UTC m=+2794.211902460" watchObservedRunningTime="2026-01-30 14:29:43.520493733 +0000 UTC m=+2794.221842224" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.151282 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.153825 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.156401 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.156613 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.170852 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.314010 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.314090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.314201 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.416100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.416148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.416214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.417561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod 
\"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.425080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.439085 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"collect-profiles-29496390-tc6sn\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.474735 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:00 crc kubenswrapper[4793]: I0130 14:30:00.956314 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 14:30:01 crc kubenswrapper[4793]: I0130 14:30:01.701579 4793 generic.go:334] "Generic (PLEG): container finished" podID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerID="1def2597602a7873d34fb216db52e7e4d4963d5b5a3ca0e36a14a7576a9a797f" exitCode=0 Jan 30 14:30:01 crc kubenswrapper[4793]: I0130 14:30:01.701668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" event={"ID":"afd3a15c-5ed4-45be-8091-84573a97a63a","Type":"ContainerDied","Data":"1def2597602a7873d34fb216db52e7e4d4963d5b5a3ca0e36a14a7576a9a797f"} Jan 30 14:30:01 crc kubenswrapper[4793]: I0130 14:30:01.701900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" event={"ID":"afd3a15c-5ed4-45be-8091-84573a97a63a","Type":"ContainerStarted","Data":"d1bd11fd8a9e4e05f7c7410583f802caafc51abcd39d08a49ce8f8afd4d84643"} Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.036488 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.084165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") pod \"afd3a15c-5ed4-45be-8091-84573a97a63a\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.084245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") pod \"afd3a15c-5ed4-45be-8091-84573a97a63a\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.084551 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") pod \"afd3a15c-5ed4-45be-8091-84573a97a63a\" (UID: \"afd3a15c-5ed4-45be-8091-84573a97a63a\") " Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.085569 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume" (OuterVolumeSpecName: "config-volume") pod "afd3a15c-5ed4-45be-8091-84573a97a63a" (UID: "afd3a15c-5ed4-45be-8091-84573a97a63a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.091670 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afd3a15c-5ed4-45be-8091-84573a97a63a" (UID: "afd3a15c-5ed4-45be-8091-84573a97a63a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.091908 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm" (OuterVolumeSpecName: "kube-api-access-fqzmm") pod "afd3a15c-5ed4-45be-8091-84573a97a63a" (UID: "afd3a15c-5ed4-45be-8091-84573a97a63a"). InnerVolumeSpecName "kube-api-access-fqzmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.186979 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/afd3a15c-5ed4-45be-8091-84573a97a63a-kube-api-access-fqzmm\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.187023 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afd3a15c-5ed4-45be-8091-84573a97a63a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.187033 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd3a15c-5ed4-45be-8091-84573a97a63a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.720835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" event={"ID":"afd3a15c-5ed4-45be-8091-84573a97a63a","Type":"ContainerDied","Data":"d1bd11fd8a9e4e05f7c7410583f802caafc51abcd39d08a49ce8f8afd4d84643"} Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.721505 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1bd11fd8a9e4e05f7c7410583f802caafc51abcd39d08a49ce8f8afd4d84643" Jan 30 14:30:03 crc kubenswrapper[4793]: I0130 14:30:03.721585 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn" Jan 30 14:30:04 crc kubenswrapper[4793]: I0130 14:30:04.123737 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 14:30:04 crc kubenswrapper[4793]: I0130 14:30:04.131458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-xbqs7"] Jan 30 14:30:04 crc kubenswrapper[4793]: I0130 14:30:04.420831 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db0dcc6-874c-40f9-a0b7-309149c78f48" path="/var/lib/kubelet/pods/6db0dcc6-874c-40f9-a0b7-309149c78f48/volumes" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.413438 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.414231 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.414300 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.415587 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57"} 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.415746 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57" gracePeriod=600 Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801364 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57" exitCode=0 Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801439 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57"} Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"} Jan 30 14:30:12 crc kubenswrapper[4793]: I0130 14:30:12.801745 4793 scope.go:117] "RemoveContainer" containerID="f30a4597062ea0a625435ce06b65c6ac08d0d1498da9a9eee23cf28c4d547c19" Jan 30 14:30:51 crc kubenswrapper[4793]: I0130 14:30:51.294426 4793 scope.go:117] "RemoveContainer" containerID="0003a0f96b0d450dcabcfae0a5907ebc6be8013da3e854ca4f0bce212cb173a6" Jan 30 14:30:57 crc kubenswrapper[4793]: I0130 14:30:57.831510 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.479582 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:31:45 crc kubenswrapper[4793]: E0130 14:31:45.480544 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerName="collect-profiles" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.480559 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerName="collect-profiles" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.480828 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" containerName="collect-profiles" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.482600 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.496781 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.558339 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.558596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.558642 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.660735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.660821 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.660882 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.661322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.661405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.690768 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"certified-operators-jtvvp\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:45 crc kubenswrapper[4793]: I0130 14:31:45.806079 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:46 crc kubenswrapper[4793]: I0130 14:31:46.449502 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:31:46 crc kubenswrapper[4793]: I0130 14:31:46.648529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c"} Jan 30 14:31:46 crc kubenswrapper[4793]: I0130 14:31:46.648845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"8c884434f855c40a03540f0ffa1d304bd12ee3704e243d46631a685f83a6e054"} Jan 30 14:31:47 crc kubenswrapper[4793]: I0130 14:31:47.659499 4793 generic.go:334] "Generic (PLEG): container finished" podID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" exitCode=0 Jan 30 14:31:47 crc kubenswrapper[4793]: I0130 14:31:47.659548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c"} Jan 30 14:31:48 crc kubenswrapper[4793]: I0130 14:31:48.671087 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0"} Jan 30 14:31:51 crc kubenswrapper[4793]: I0130 14:31:51.703577 4793 generic.go:334] "Generic (PLEG): container finished" podID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" exitCode=0 Jan 30 14:31:51 crc kubenswrapper[4793]: I0130 14:31:51.703626 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0"} Jan 30 14:31:52 crc kubenswrapper[4793]: I0130 14:31:52.713682 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerStarted","Data":"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d"} Jan 30 14:31:52 crc kubenswrapper[4793]: I0130 14:31:52.747964 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jtvvp" podStartSLOduration=3.283443355 podStartE2EDuration="7.747939521s" podCreationTimestamp="2026-01-30 14:31:45 +0000 UTC" firstStartedPulling="2026-01-30 14:31:47.661184803 +0000 UTC m=+2918.362533294" lastFinishedPulling="2026-01-30 
14:31:52.125680969 +0000 UTC m=+2922.827029460" observedRunningTime="2026-01-30 14:31:52.7406187 +0000 UTC m=+2923.441967211" watchObservedRunningTime="2026-01-30 14:31:52.747939521 +0000 UTC m=+2923.449288012" Jan 30 14:31:55 crc kubenswrapper[4793]: I0130 14:31:55.807241 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:55 crc kubenswrapper[4793]: I0130 14:31:55.807678 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:31:55 crc kubenswrapper[4793]: I0130 14:31:55.862218 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:05 crc kubenswrapper[4793]: I0130 14:32:05.854818 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:05 crc kubenswrapper[4793]: I0130 14:32:05.906894 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:32:06 crc kubenswrapper[4793]: I0130 14:32:06.861654 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jtvvp" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" containerID="cri-o://b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" gracePeriod=2 Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.339783 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.416154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") pod \"741a3bc2-86fb-4c08-9403-71f9900d2685\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.416281 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") pod \"741a3bc2-86fb-4c08-9403-71f9900d2685\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.416352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") pod \"741a3bc2-86fb-4c08-9403-71f9900d2685\" (UID: \"741a3bc2-86fb-4c08-9403-71f9900d2685\") " Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.417243 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities" (OuterVolumeSpecName: "utilities") pod "741a3bc2-86fb-4c08-9403-71f9900d2685" (UID: "741a3bc2-86fb-4c08-9403-71f9900d2685"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.434235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9" (OuterVolumeSpecName: "kube-api-access-h6lq9") pod "741a3bc2-86fb-4c08-9403-71f9900d2685" (UID: "741a3bc2-86fb-4c08-9403-71f9900d2685"). InnerVolumeSpecName "kube-api-access-h6lq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.476463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "741a3bc2-86fb-4c08-9403-71f9900d2685" (UID: "741a3bc2-86fb-4c08-9403-71f9900d2685"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.518799 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.519083 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/741a3bc2-86fb-4c08-9403-71f9900d2685-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.519155 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6lq9\" (UniqueName: \"kubernetes.io/projected/741a3bc2-86fb-4c08-9403-71f9900d2685-kube-api-access-h6lq9\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.880586 4793 generic.go:334] "Generic (PLEG): container finished" podID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" exitCode=0 Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.880643 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d"} Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.882403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtvvp" event={"ID":"741a3bc2-86fb-4c08-9403-71f9900d2685","Type":"ContainerDied","Data":"8c884434f855c40a03540f0ffa1d304bd12ee3704e243d46631a685f83a6e054"} Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.880667 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtvvp" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.882795 4793 scope.go:117] "RemoveContainer" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.931906 4793 scope.go:117] "RemoveContainer" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.939805 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.951433 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jtvvp"] Jan 30 14:32:07 crc kubenswrapper[4793]: I0130 14:32:07.971439 4793 scope.go:117] "RemoveContainer" containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.018910 4793 scope.go:117] "RemoveContainer" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" Jan 30 14:32:08 crc kubenswrapper[4793]: E0130 14:32:08.019459 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d\": container with ID starting with b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d not found: ID does not exist" containerID="b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.019752 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d"} err="failed to get container status \"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d\": rpc error: code = NotFound desc = could not find container \"b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d\": container with ID starting with b3f96ca5db4f8a1cf2f0614f8662cd51b20560b04c3a91d92ecd6f2711898a4d not found: ID does not exist" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.019779 4793 scope.go:117] "RemoveContainer" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" Jan 30 14:32:08 crc kubenswrapper[4793]: E0130 14:32:08.020138 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0\": container with ID starting with 32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0 not found: ID does not exist" containerID="32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.020174 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0"} err="failed to get container status \"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0\": rpc error: code = NotFound desc = could not find container \"32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0\": container with ID starting with 32ce143f08bde6b6b5c3c97836e19008a94f4c1635c91f5a2d0d2e1c500372a0 not found: ID does not exist" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.020196 4793 scope.go:117] "RemoveContainer" 
containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" Jan 30 14:32:08 crc kubenswrapper[4793]: E0130 14:32:08.020529 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c\": container with ID starting with 6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c not found: ID does not exist" containerID="6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.020552 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c"} err="failed to get container status \"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c\": rpc error: code = NotFound desc = could not find container \"6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c\": container with ID starting with 6d178b1727198141d0b8cab8ec7fe6fa68bf9c258ac38df4621d7c97a6872b8c not found: ID does not exist" Jan 30 14:32:08 crc kubenswrapper[4793]: I0130 14:32:08.411616 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" path="/var/lib/kubelet/pods/741a3bc2-86fb-4c08-9403-71f9900d2685/volumes" Jan 30 14:32:12 crc kubenswrapper[4793]: I0130 14:32:12.414149 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:32:12 crc kubenswrapper[4793]: I0130 14:32:12.414656 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:32:19 crc kubenswrapper[4793]: I0130 14:32:19.999228 4793 generic.go:334] "Generic (PLEG): container finished" podID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerID="5e41fdf863829756b00ca7e86cc571728bb392f0583e10c4de618e692db88093" exitCode=0 Jan 30 14:32:19 crc kubenswrapper[4793]: I0130 14:32:19.999347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerDied","Data":"5e41fdf863829756b00ca7e86cc571728bb392f0583e10c4de618e692db88093"} Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.459460 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629353 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629414 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629453 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629577 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.629620 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") pod \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\" (UID: \"dfc4d2ba-0414-4f1e-8733-a75d39218ef8\") " Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.638468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc" (OuterVolumeSpecName: "kube-api-access-c4bsc") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "kube-api-access-c4bsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.638666 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.655578 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.660428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.662925 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.668365 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.676424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.690188 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.705737 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory" (OuterVolumeSpecName: "inventory") pod "dfc4d2ba-0414-4f1e-8733-a75d39218ef8" (UID: "dfc4d2ba-0414-4f1e-8733-a75d39218ef8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731123 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731159 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4bsc\" (UniqueName: \"kubernetes.io/projected/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-kube-api-access-c4bsc\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731174 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731187 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731198 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731207 4793 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731216 4793 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731224 4793 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:21 crc kubenswrapper[4793]: I0130 14:32:21.731232 4793 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4d2ba-0414-4f1e-8733-a75d39218ef8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.016401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" event={"ID":"dfc4d2ba-0414-4f1e-8733-a75d39218ef8","Type":"ContainerDied","Data":"35c08494f8afe2508d0796d2d7916a60b01429d9956705b3e7cc36e86561fae0"} Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.016452 4793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="35c08494f8afe2508d0796d2d7916a60b01429d9956705b3e7cc36e86561fae0" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.016502 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sk8t8" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.293985 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"] Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294618 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.294687 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294760 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-content" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.294811 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-content" Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294879 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-utilities" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.294937 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="extract-utilities" Jan 30 14:32:22 crc kubenswrapper[4793]: E0130 14:32:22.294995 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.295079 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.295595 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="741a3bc2-86fb-4c08-9403-71f9900d2685" containerName="registry-server" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.295696 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc4d2ba-0414-4f1e-8733-a75d39218ef8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.296483 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300257 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300623 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300783 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qq6vk" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.300952 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.301208 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.316818 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"] Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342405 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342717 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.342818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 
crc kubenswrapper[4793]: I0130 14:32:22.342941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.343020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.444796 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.445738 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.445793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.445898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.446009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.446037 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" 
(UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.446140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.448934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.449403 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.451893 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.452772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.453081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.453733 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.470909 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:22 crc kubenswrapper[4793]: I0130 14:32:22.626068 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:32:23 crc kubenswrapper[4793]: I0130 14:32:23.141881 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb"] Jan 30 14:32:24 crc kubenswrapper[4793]: I0130 14:32:24.040783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerStarted","Data":"a64d90e6e708916bddb2fb85fc43ea11a1f35f9eae3151af244a63d85665315a"} Jan 30 14:32:24 crc kubenswrapper[4793]: I0130 14:32:24.041130 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerStarted","Data":"34ff75da3ef3d1a97297c8bba1b71ad20c81e8b1c9fef9fb1b215b54b7a4a0d3"} Jan 30 14:32:24 crc kubenswrapper[4793]: I0130 14:32:24.064490 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" podStartSLOduration=1.629312266 podStartE2EDuration="2.064472558s" podCreationTimestamp="2026-01-30 14:32:22 +0000 UTC" firstStartedPulling="2026-01-30 14:32:23.153714062 +0000 UTC m=+2953.855062553" lastFinishedPulling="2026-01-30 14:32:23.588874354 +0000 UTC m=+2954.290222845" observedRunningTime="2026-01-30 14:32:24.061356952 +0000 UTC m=+2954.762705473" watchObservedRunningTime="2026-01-30 14:32:24.064472558 +0000 UTC m=+2954.765821049" Jan 30 14:32:42 crc kubenswrapper[4793]: I0130 14:32:42.413296 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:32:42 crc kubenswrapper[4793]: I0130 14:32:42.413919 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.413240 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.413819 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.413864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.414572 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:33:12 crc kubenswrapper[4793]: I0130 14:33:12.414629 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" gracePeriod=600 Jan 30 14:33:13 crc kubenswrapper[4793]: E0130 14:33:13.152953 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.486281 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" exitCode=0 Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.486340 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"} Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.486386 4793 scope.go:117] "RemoveContainer" containerID="70fb244a70a270db2f48a61c7b2320a4725cc48ffb5d0786cb6f3e83b0333e57" Jan 30 14:33:13 crc kubenswrapper[4793]: I0130 14:33:13.487158 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:33:13 crc kubenswrapper[4793]: E0130 14:33:13.487429 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:28 crc kubenswrapper[4793]: I0130 14:33:28.399386 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:33:28 crc kubenswrapper[4793]: E0130 14:33:28.400145 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:40 crc kubenswrapper[4793]: I0130 
14:33:40.405404 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:33:40 crc kubenswrapper[4793]: E0130 14:33:40.406138 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:33:54 crc kubenswrapper[4793]: I0130 14:33:54.398080 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:33:54 crc kubenswrapper[4793]: E0130 14:33:54.398792 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:34:09 crc kubenswrapper[4793]: I0130 14:34:09.398383 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:34:09 crc kubenswrapper[4793]: E0130 14:34:09.399190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:34:23 crc kubenswrapper[4793]: I0130 14:34:23.398396 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:34:23 crc kubenswrapper[4793]: E0130 14:34:23.399114 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:34:38 crc kubenswrapper[4793]: I0130 14:34:38.398860 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:34:38 crc kubenswrapper[4793]: E0130 14:34:38.399649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:34:49 crc kubenswrapper[4793]: I0130 14:34:49.398554 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:34:49 crc kubenswrapper[4793]: E0130 14:34:49.400693 
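From 14:32:42 the machine-config-daemon's liveness probe starts failing (connection refused on 127.0.0.1:8798), so the kubelet kills the container with its grace period and then holds it in CrashLoopBackOff with the back-off capped at 5m0s; the repeating scope.go "RemoveContainer" / pod_workers.go "Error syncing pod" pairs here and below are backoff retries of the same restart, not new failures. A minimal sketch, assuming a reachable kubeconfig and the official Python client, of reading the same state from the API instead of the journal:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(
    "machine-config-daemon-rdsch", "openshift-machine-config-operator")
for cs in pod.status.container_statuses or []:
    waiting = cs.state.waiting
    if waiting and waiting.reason == "CrashLoopBackOff":
        # last_state.terminated carries the exit code of the killed container
        # (0 in the PLEG "container finished" line above).
        term = cs.last_state.terminated
        print(cs.name, waiting.reason, term.exit_code if term else None)
```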
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:35:02 crc kubenswrapper[4793]: I0130 14:35:02.399238 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:35:02 crc kubenswrapper[4793]: E0130 14:35:02.400319 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:35:13 crc kubenswrapper[4793]: I0130 14:35:13.399308 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:35:13 crc kubenswrapper[4793]: E0130 14:35:13.400503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:35:28 crc kubenswrapper[4793]: I0130 14:35:28.398810 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:35:28 crc kubenswrapper[4793]: E0130 14:35:28.399608 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:35:40 crc kubenswrapper[4793]: I0130 14:35:40.774515 4793 generic.go:334] "Generic (PLEG): container finished" podID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerID="a64d90e6e708916bddb2fb85fc43ea11a1f35f9eae3151af244a63d85665315a" exitCode=0 Jan 30 14:35:40 crc kubenswrapper[4793]: I0130 14:35:40.774641 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerDied","Data":"a64d90e6e708916bddb2fb85fc43ea11a1f35f9eae3151af244a63d85665315a"} Jan 30 14:35:41 crc kubenswrapper[4793]: I0130 14:35:41.399005 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:35:41 crc kubenswrapper[4793]: E0130 14:35:41.399246 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.205658 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.321849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.321933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.321989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322064 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322166 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322193 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.322265 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") pod \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\" (UID: \"8b1317e1-63f1-4b06-aa31-5df5459c6ce6\") " Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.327891 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.339642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc" (OuterVolumeSpecName: "kube-api-access-hw7hc") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "kube-api-access-hw7hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.356133 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.356366 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.356665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.359965 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.375814 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory" (OuterVolumeSpecName: "inventory") pod "8b1317e1-63f1-4b06-aa31-5df5459c6ce6" (UID: "8b1317e1-63f1-4b06-aa31-5df5459c6ce6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429340 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429378 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429421 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7hc\" (UniqueName: \"kubernetes.io/projected/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-kube-api-access-hw7hc\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429437 4793 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429449 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429464 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.429612 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1317e1-63f1-4b06-aa31-5df5459c6ce6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.798815 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" event={"ID":"8b1317e1-63f1-4b06-aa31-5df5459c6ce6","Type":"ContainerDied","Data":"34ff75da3ef3d1a97297c8bba1b71ad20c81e8b1c9fef9fb1b215b54b7a4a0d3"} Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.798866 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ff75da3ef3d1a97297c8bba1b71ad20c81e8b1c9fef9fb1b215b54b7a4a0d3" Jan 30 14:35:42 crc kubenswrapper[4793]: I0130 14:35:42.798874 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb" Jan 30 14:35:52 crc kubenswrapper[4793]: I0130 14:35:52.398245 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:35:52 crc kubenswrapper[4793]: E0130 14:35:52.399141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:36:04 crc kubenswrapper[4793]: I0130 14:36:04.398680 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:36:04 crc kubenswrapper[4793]: E0130 14:36:04.400332 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:36:19 crc kubenswrapper[4793]: I0130 14:36:19.398214 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:36:19 crc kubenswrapper[4793]: E0130 14:36:19.398919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:36:34 crc kubenswrapper[4793]: I0130 14:36:34.398266 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:36:34 crc kubenswrapper[4793]: E0130 14:36:34.399211 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.179080 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 14:36:49 crc kubenswrapper[4793]: E0130 14:36:49.180172 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.180194 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.180405 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8b1317e1-63f1-4b06-aa31-5df5459c6ce6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.181185 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.183937 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.183994 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.184188 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.184627 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9sb9w" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.208548 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323791 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323819 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323864 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323925 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.323990 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.324158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.398907 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:36:49 crc kubenswrapper[4793]: E0130 14:36:49.399236 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426350 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426372 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426411 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426433 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426505 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426819 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.426884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.427673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.427744 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.427806 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: 
\"4bf53e2d-d024-4526-ada2-0ee6b461babb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.432545 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.433392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.442283 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.447797 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.455128 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " pod="openstack/tempest-tests-tempest" Jan 30 14:36:49 crc kubenswrapper[4793]: I0130 14:36:49.501199 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 14:36:50 crc kubenswrapper[4793]: I0130 14:36:50.003004 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 14:36:50 crc kubenswrapper[4793]: I0130 14:36:50.003857 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:36:50 crc kubenswrapper[4793]: I0130 14:36:50.396366 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerStarted","Data":"55c6a2b8062403d0e3d82dc5615fa6326ff29a1fce4fe5257e5d197c6f2071cb"} Jan 30 14:37:04 crc kubenswrapper[4793]: I0130 14:37:04.402786 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:04 crc kubenswrapper[4793]: E0130 14:37:04.477146 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:19 crc kubenswrapper[4793]: I0130 14:37:19.076103 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-lm7l8" podUID="e88efb4a-1489-4847-adb4-230a8b5db6ef" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.78:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 14:37:19 crc kubenswrapper[4793]: I0130 14:37:19.973339 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:19 crc kubenswrapper[4793]: E0130 14:37:19.997765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:32 crc kubenswrapper[4793]: I0130 14:37:32.398162 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:32 crc kubenswrapper[4793]: E0130 14:37:32.398920 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.112676 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.113390 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-579bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(4bf53e2d-d024-4526-ada2-0ee6b461babb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.115415 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
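The first pull of the openstack-tempest-all image is cancelled mid-copy, so StartContainer fails with ErrImagePull and the pod drops into ImagePullBackOff below; the giant &Container{...} dump on the "Unhandled Error" line is the kubelet logging the full container spec on this failure path, not corruption. These failures also surface as Events on the pod; a minimal sketch for listing them, assuming a reachable kubeconfig, with the pod name and namespace taken from the log:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
# Events whose involvedObject is the tempest pod; pull-related reasons
# typically include Pulling, Pulled, Failed, and BackOff.
events = v1.list_namespaced_event(
    "openstack", field_selector="involvedObject.name=tempest-tests-tempest")
for ev in events.items:
    if ev.reason in ("Pulling", "Pulled", "Failed", "BackOff"):
        print(ev.last_timestamp, ev.reason, ev.message)
```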
podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" Jan 30 14:37:41 crc kubenswrapper[4793]: E0130 14:37:41.198381 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" Jan 30 14:37:43 crc kubenswrapper[4793]: I0130 14:37:43.399171 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:43 crc kubenswrapper[4793]: E0130 14:37:43.399834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:55 crc kubenswrapper[4793]: I0130 14:37:55.251541 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 14:37:58 crc kubenswrapper[4793]: I0130 14:37:58.414076 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:37:58 crc kubenswrapper[4793]: E0130 14:37:58.415104 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:37:58 crc kubenswrapper[4793]: I0130 14:37:58.491971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerStarted","Data":"d89fe0491771c7c6f955e91e1925c9e0d02dd442783163c9438dbd9b02ce47d9"} Jan 30 14:37:58 crc kubenswrapper[4793]: I0130 14:37:58.533902 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.289722724 podStartE2EDuration="1m10.533889233s" podCreationTimestamp="2026-01-30 14:36:48 +0000 UTC" firstStartedPulling="2026-01-30 14:36:50.003607353 +0000 UTC m=+3220.704955854" lastFinishedPulling="2026-01-30 14:37:55.247773872 +0000 UTC m=+3285.949122363" observedRunningTime="2026-01-30 14:37:58.533027362 +0000 UTC m=+3289.234375873" watchObservedRunningTime="2026-01-30 14:37:58.533889233 +0000 UTC m=+3289.235237724" Jan 30 14:38:13 crc kubenswrapper[4793]: I0130 14:38:13.399263 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff" Jan 30 14:38:14 crc kubenswrapper[4793]: I0130 14:38:14.644465 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc"} Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.700260 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.703488 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.778950 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.783967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.784112 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.784236 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.885677 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.885834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.885974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.906903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.906952 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"community-operators-8zg8s\" (UID: 
\"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:31 crc kubenswrapper[4793]: I0130 14:38:31.923884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"community-operators-8zg8s\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:32 crc kubenswrapper[4793]: I0130 14:38:32.022586 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.253291 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.971984 4793 generic.go:334] "Generic (PLEG): container finished" podID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerID="b03772cc1fe623304aa850d2ae3e7a880985ec5280b330df6c3f217d693baf92" exitCode=0 Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.972180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"b03772cc1fe623304aa850d2ae3e7a880985ec5280b330df6c3f217d693baf92"} Jan 30 14:38:33 crc kubenswrapper[4793]: I0130 14:38:33.972339 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerStarted","Data":"e316ead69d15b12075fc9f1b6e2697a44e33133531f74ce11960699c1bb8a38d"} Jan 30 14:38:35 crc kubenswrapper[4793]: I0130 14:38:35.988649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerStarted","Data":"2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617"} Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.445364 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.447673 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.466915 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.571783 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.572104 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.572250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674614 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674642 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.674878 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.675116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.717309 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"redhat-marketplace-cwwtp\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:40 crc kubenswrapper[4793]: I0130 14:38:40.809029 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:38:41 crc kubenswrapper[4793]: I0130 14:38:41.148374 4793 generic.go:334] "Generic (PLEG): container finished" podID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerID="2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617" exitCode=0 Jan 30 14:38:41 crc kubenswrapper[4793]: I0130 14:38:41.148414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617"} Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.046837 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:38:43 crc kubenswrapper[4793]: W0130 14:38:43.077269 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb9b1ca_f2f2_4d59_91d8_f6c5b0ce4615.slice/crio-41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45 WatchSource:0}: Error finding container 41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45: Status 404 returned error can't find the container with id 41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45 Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.164183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerStarted","Data":"41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45"} Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.168833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerStarted","Data":"73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea"} Jan 30 14:38:43 crc kubenswrapper[4793]: I0130 14:38:43.188090 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8zg8s" podStartSLOduration=3.475555224 podStartE2EDuration="12.188068097s" podCreationTimestamp="2026-01-30 14:38:31 +0000 UTC" firstStartedPulling="2026-01-30 14:38:33.97371159 +0000 UTC m=+3324.675060081" lastFinishedPulling="2026-01-30 14:38:42.686224463 +0000 UTC m=+3333.387572954" observedRunningTime="2026-01-30 14:38:43.18285622 +0000 UTC m=+3333.884204721" watchObservedRunningTime="2026-01-30 14:38:43.188068097 +0000 UTC m=+3333.889416598" Jan 30 14:38:44 crc kubenswrapper[4793]: I0130 14:38:44.181072 4793 generic.go:334] "Generic (PLEG): container finished" podID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" exitCode=0 Jan 30 14:38:44 crc kubenswrapper[4793]: I0130 14:38:44.181160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" 
event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4"} Jan 30 14:38:46 crc kubenswrapper[4793]: I0130 14:38:46.198383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerStarted","Data":"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d"} Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.024149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.024905 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.116513 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.278649 4793 generic.go:334] "Generic (PLEG): container finished" podID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" exitCode=0 Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.279681 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d"} Jan 30 14:38:52 crc kubenswrapper[4793]: I0130 14:38:52.425586 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:53 crc kubenswrapper[4793]: I0130 14:38:53.371140 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:54 crc kubenswrapper[4793]: I0130 14:38:54.297270 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8zg8s" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" containerID="cri-o://73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea" gracePeriod=2 Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.306930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerStarted","Data":"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d"} Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.309364 4793 generic.go:334] "Generic (PLEG): container finished" podID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerID="73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea" exitCode=0 Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.309405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea"} Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.340862 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwwtp" podStartSLOduration=5.299429996 podStartE2EDuration="15.340843109s" podCreationTimestamp="2026-01-30 14:38:40 
+0000 UTC" firstStartedPulling="2026-01-30 14:38:44.183993528 +0000 UTC m=+3334.885342019" lastFinishedPulling="2026-01-30 14:38:54.225406641 +0000 UTC m=+3344.926755132" observedRunningTime="2026-01-30 14:38:55.329336306 +0000 UTC m=+3346.030684817" watchObservedRunningTime="2026-01-30 14:38:55.340843109 +0000 UTC m=+3346.042191600" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.704350 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.838766 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") pod \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.839144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") pod \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.839266 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") pod \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\" (UID: \"262ecbe3-59ce-4b01-988f-fdffe2abbeaf\") " Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.839957 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities" (OuterVolumeSpecName: "utilities") pod "262ecbe3-59ce-4b01-988f-fdffe2abbeaf" (UID: "262ecbe3-59ce-4b01-988f-fdffe2abbeaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.853290 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk" (OuterVolumeSpecName: "kube-api-access-7lhrk") pod "262ecbe3-59ce-4b01-988f-fdffe2abbeaf" (UID: "262ecbe3-59ce-4b01-988f-fdffe2abbeaf"). InnerVolumeSpecName "kube-api-access-7lhrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.889789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "262ecbe3-59ce-4b01-988f-fdffe2abbeaf" (UID: "262ecbe3-59ce-4b01-988f-fdffe2abbeaf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.941507 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.941544 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:55 crc kubenswrapper[4793]: I0130 14:38:55.941554 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhrk\" (UniqueName: \"kubernetes.io/projected/262ecbe3-59ce-4b01-988f-fdffe2abbeaf-kube-api-access-7lhrk\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.341359 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg8s" event={"ID":"262ecbe3-59ce-4b01-988f-fdffe2abbeaf","Type":"ContainerDied","Data":"e316ead69d15b12075fc9f1b6e2697a44e33133531f74ce11960699c1bb8a38d"} Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.341438 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg8s" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.341465 4793 scope.go:117] "RemoveContainer" containerID="73896ac0ada401c9f8dc61d946fc97d1cee80216dbe5f2029090a2926d4eddea" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.375018 4793 scope.go:117] "RemoveContainer" containerID="2acaac3fee7d377a8aa22b9ec1b7e360c30b74520e70444e839063c6ac86c617" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.403291 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.413022 4793 scope.go:117] "RemoveContainer" containerID="b03772cc1fe623304aa850d2ae3e7a880985ec5280b330df6c3f217d693baf92" Jan 30 14:38:56 crc kubenswrapper[4793]: I0130 14:38:56.431244 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8zg8s"] Jan 30 14:38:58 crc kubenswrapper[4793]: I0130 14:38:58.409169 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" path="/var/lib/kubelet/pods/262ecbe3-59ce-4b01-988f-fdffe2abbeaf/volumes" Jan 30 14:39:00 crc kubenswrapper[4793]: I0130 14:39:00.809920 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:00 crc kubenswrapper[4793]: I0130 14:39:00.810205 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:01 crc kubenswrapper[4793]: I0130 14:39:01.859979 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cwwtp" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" probeResult="failure" output=< Jan 30 14:39:01 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:39:01 crc kubenswrapper[4793]: > Jan 30 14:39:10 crc kubenswrapper[4793]: I0130 14:39:10.868027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:10 crc kubenswrapper[4793]: I0130 
14:39:10.919623 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:11 crc kubenswrapper[4793]: I0130 14:39:11.654357 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:39:12 crc kubenswrapper[4793]: I0130 14:39:12.531663 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwwtp" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" containerID="cri-o://15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" gracePeriod=2 Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.188254 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.317785 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") pod \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.317962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") pod \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.318064 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") pod \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\" (UID: \"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615\") " Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.319427 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities" (OuterVolumeSpecName: "utilities") pod "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" (UID: "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.329974 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4" (OuterVolumeSpecName: "kube-api-access-ghnn4") pod "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" (UID: "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615"). InnerVolumeSpecName "kube-api-access-ghnn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.349170 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" (UID: "abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.420082 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.420118 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.420129 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghnn4\" (UniqueName: \"kubernetes.io/projected/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615-kube-api-access-ghnn4\") on node \"crc\" DevicePath \"\"" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542840 4793 generic.go:334] "Generic (PLEG): container finished" podID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" exitCode=0 Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d"} Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542929 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwwtp" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542986 4793 scope.go:117] "RemoveContainer" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.542970 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwwtp" event={"ID":"abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615","Type":"ContainerDied","Data":"41afcbc731f9ad086daffdad7b5355d636cf0021a0552a0c1fbc3b5f3f242e45"} Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.569630 4793 scope.go:117] "RemoveContainer" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.614404 4793 scope.go:117] "RemoveContainer" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.637243 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.649919 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwwtp"] Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.654559 4793 scope.go:117] "RemoveContainer" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" Jan 30 14:39:13 crc kubenswrapper[4793]: E0130 14:39:13.655007 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d\": container with ID starting with 15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d not found: ID does not exist" containerID="15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655070 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d"} err="failed to get container status \"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d\": rpc error: code = NotFound desc = could not find container \"15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d\": container with ID starting with 15e33bb084ef4978e284b035570f5fc474378f76b06605025ca596009541a57d not found: ID does not exist" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655091 4793 scope.go:117] "RemoveContainer" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" Jan 30 14:39:13 crc kubenswrapper[4793]: E0130 14:39:13.655379 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d\": container with ID starting with 8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d not found: ID does not exist" containerID="8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655416 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d"} err="failed to get container status \"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d\": rpc error: code = NotFound desc = could not find container \"8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d\": container with ID starting with 8fc29c0e94ef6281a74522bf422afc3b88d507516066ee395143512ea530004d not found: ID does not exist" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655436 4793 scope.go:117] "RemoveContainer" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" Jan 30 14:39:13 crc kubenswrapper[4793]: E0130 14:39:13.655697 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4\": container with ID starting with 358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4 not found: ID does not exist" containerID="358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4" Jan 30 14:39:13 crc kubenswrapper[4793]: I0130 14:39:13.655725 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4"} err="failed to get container status \"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4\": rpc error: code = NotFound desc = could not find container \"358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4\": container with ID starting with 358f8d48b606ba587619fee997593cbf40dd5bfc13824e7644c60aa18c968ec4 not found: ID does not exist" Jan 30 14:39:14 crc kubenswrapper[4793]: I0130 14:39:14.411824 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" path="/var/lib/kubelet/pods/abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615/volumes" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435119 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"] Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435909 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435920 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435937 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435944 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435956 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435962 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.435980 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.435986 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-content" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.436008 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436013 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: E0130 14:39:38.436024 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436029 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="extract-utilities" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436197 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb9b1ca-f2f2-4d59-91d8-f6c5b0ce4615" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.436217 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="262ecbe3-59ce-4b01-988f-fdffe2abbeaf" containerName="registry-server" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.437479 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.459108 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"] Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.520090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.520167 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.520244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.622359 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.622437 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.622516 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.623108 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.623110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.643482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"redhat-operators-d22cv\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") " pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:38 crc kubenswrapper[4793]: I0130 14:39:38.759171 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv" Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.250658 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"] Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.778135 4793 generic.go:334] "Generic (PLEG): container finished" podID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20" exitCode=0 Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.778270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"} Jan 30 14:39:39 crc kubenswrapper[4793]: I0130 14:39:39.778441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerStarted","Data":"37e9623922456531cfc7cc936a8aa3fa6f702e72bc6a0a5f3f985a532c534c40"} Jan 30 14:39:40 crc kubenswrapper[4793]: I0130 14:39:40.786610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerStarted","Data":"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"} Jan 30 14:39:49 crc kubenswrapper[4793]: I0130 14:39:49.871536 4793 generic.go:334] "Generic (PLEG): container finished" podID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29" exitCode=0 Jan 30 14:39:49 crc kubenswrapper[4793]: I0130 14:39:49.871602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"} Jan 30 14:39:50 crc kubenswrapper[4793]: I0130 14:39:50.914277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerStarted","Data":"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"} Jan 30 14:39:50 crc kubenswrapper[4793]: I0130 14:39:50.939717 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d22cv" podStartSLOduration=2.399260297 podStartE2EDuration="12.939666178s" podCreationTimestamp="2026-01-30 14:39:38 +0000 UTC" firstStartedPulling="2026-01-30 14:39:39.780267273 +0000 UTC m=+3390.481615764" lastFinishedPulling="2026-01-30 14:39:50.320673154 +0000 UTC m=+3401.022021645" observedRunningTime="2026-01-30 14:39:50.934805448 +0000 UTC m=+3401.636153949" watchObservedRunningTime="2026-01-30 14:39:50.939666178 +0000 UTC m=+3401.641014669" Jan 30 14:39:58 crc kubenswrapper[4793]: I0130 14:39:58.759502 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d22cv" 
Jan 30 14:39:58 crc kubenswrapper[4793]: I0130 14:39:58.760087 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:39:59 crc kubenswrapper[4793]: I0130 14:39:59.807776 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d22cv" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server" probeResult="failure" output=<
Jan 30 14:39:59 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Jan 30 14:39:59 crc kubenswrapper[4793]: >
Jan 30 14:40:08 crc kubenswrapper[4793]: I0130 14:40:08.808390 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:08 crc kubenswrapper[4793]: I0130 14:40:08.861071 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:09 crc kubenswrapper[4793]: I0130 14:40:09.637739 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"]
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.076800 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d22cv" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server" containerID="cri-o://b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f" gracePeriod=2
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.828037 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.866983 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") pod \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") "
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.867251 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") pod \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") "
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.867307 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") pod \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\" (UID: \"c91d9b4c-8c51-4d39-883a-e0911bde0ad9\") "
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.875869 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2" (OuterVolumeSpecName: "kube-api-access-72cx2") pod "c91d9b4c-8c51-4d39-883a-e0911bde0ad9" (UID: "c91d9b4c-8c51-4d39-883a-e0911bde0ad9"). InnerVolumeSpecName "kube-api-access-72cx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.876466 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities" (OuterVolumeSpecName: "utilities") pod "c91d9b4c-8c51-4d39-883a-e0911bde0ad9" (UID: "c91d9b4c-8c51-4d39-883a-e0911bde0ad9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.969684 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:40:10 crc kubenswrapper[4793]: I0130 14:40:10.969733 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72cx2\" (UniqueName: \"kubernetes.io/projected/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-kube-api-access-72cx2\") on node \"crc\" DevicePath \"\""
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.040456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c91d9b4c-8c51-4d39-883a-e0911bde0ad9" (UID: "c91d9b4c-8c51-4d39-883a-e0911bde0ad9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.071434 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91d9b4c-8c51-4d39-883a-e0911bde0ad9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.087992 4793 generic.go:334] "Generic (PLEG): container finished" podID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f" exitCode=0
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.088035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"}
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.089743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d22cv" event={"ID":"c91d9b4c-8c51-4d39-883a-e0911bde0ad9","Type":"ContainerDied","Data":"37e9623922456531cfc7cc936a8aa3fa6f702e72bc6a0a5f3f985a532c534c40"}
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.088117 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d22cv"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.089788 4793 scope.go:117] "RemoveContainer" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.114915 4793 scope.go:117] "RemoveContainer" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.138881 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"]
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.147193 4793 scope.go:117] "RemoveContainer" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.147969 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d22cv"]
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.183512 4793 scope.go:117] "RemoveContainer" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"
Jan 30 14:40:11 crc kubenswrapper[4793]: E0130 14:40:11.184032 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f\": container with ID starting with b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f not found: ID does not exist" containerID="b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184118 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f"} err="failed to get container status \"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f\": rpc error: code = NotFound desc = could not find container \"b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f\": container with ID starting with b1421a9f31cbce8ebee8cd2be06699f2970165256bb4d9d843105dfb99332d3f not found: ID does not exist"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184154 4793 scope.go:117] "RemoveContainer" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"
Jan 30 14:40:11 crc kubenswrapper[4793]: E0130 14:40:11.184576 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29\": container with ID starting with 30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29 not found: ID does not exist" containerID="30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184602 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29"} err="failed to get container status \"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29\": rpc error: code = NotFound desc = could not find container \"30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29\": container with ID starting with 30a4e7fa9eb3571fb8898931ab1fc707aa327b59d2520eb1cd980287c607fe29 not found: ID does not exist"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184616 4793 scope.go:117] "RemoveContainer" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"
Jan 30 14:40:11 crc kubenswrapper[4793]: E0130 14:40:11.184858 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20\": container with ID starting with 45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20 not found: ID does not exist" containerID="45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"
Jan 30 14:40:11 crc kubenswrapper[4793]: I0130 14:40:11.184878 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20"} err="failed to get container status \"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20\": rpc error: code = NotFound desc = could not find container \"45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20\": container with ID starting with 45cf288b9581002266dd0e8445b53faa48fcdbf6e7636166362361886d3cec20 not found: ID does not exist"
Jan 30 14:40:12 crc kubenswrapper[4793]: I0130 14:40:12.411425 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" path="/var/lib/kubelet/pods/c91d9b4c-8c51-4d39-883a-e0911bde0ad9/volumes"
Jan 30 14:40:42 crc kubenswrapper[4793]: I0130 14:40:42.413406 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:40:42 crc kubenswrapper[4793]: I0130 14:40:42.414007 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:41:12 crc kubenswrapper[4793]: I0130 14:41:12.413281 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:41:12 crc kubenswrapper[4793]: I0130 14:41:12.413758 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.421814 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.422469 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.422753 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.423504 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.423561 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc" gracePeriod=600
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.921490 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc" exitCode=0
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.921964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc"}
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.922077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716"}
Jan 30 14:41:42 crc kubenswrapper[4793]: I0130 14:41:42.922158 4793 scope.go:117] "RemoveContainer" containerID="50e7a31a10b239a7d221468f819a82997c008ad0310bd9e127e109220e4645ff"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.251535 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"]
Jan 30 14:41:50 crc kubenswrapper[4793]: E0130 14:41:50.252463 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-utilities"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252479 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-utilities"
Jan 30 14:41:50 crc kubenswrapper[4793]: E0130 14:41:50.252521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-content"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252530 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="extract-content"
Jan 30 14:41:50 crc kubenswrapper[4793]: E0130 14:41:50.252561 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252569 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.252805 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91d9b4c-8c51-4d39-883a-e0911bde0ad9" containerName="registry-server"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.254308 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.283100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"]
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.331547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.331633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.331728 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.433461 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.433798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.434241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.434613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg"
Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.434648 4793
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.473785 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"certified-operators-kwkmg\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:41:50 crc kubenswrapper[4793]: I0130 14:41:50.587715 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:41:51 crc kubenswrapper[4793]: I0130 14:41:51.194447 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.007718 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" exitCode=0 Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.008023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac"} Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.008058 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerStarted","Data":"c153764a949f50d21d71def364eb8bcb1b9bbda31f3f770f7a6cbb2167fdd2b3"} Jan 30 14:41:52 crc kubenswrapper[4793]: I0130 14:41:52.014749 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:41:54 crc kubenswrapper[4793]: I0130 14:41:54.026158 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerStarted","Data":"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251"} Jan 30 14:42:01 crc kubenswrapper[4793]: I0130 14:42:01.092950 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" exitCode=0 Jan 30 14:42:01 crc kubenswrapper[4793]: I0130 14:42:01.093035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251"} Jan 30 14:42:07 crc kubenswrapper[4793]: I0130 14:42:07.145494 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerStarted","Data":"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3"} Jan 30 14:42:07 crc kubenswrapper[4793]: I0130 14:42:07.168729 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kwkmg" podStartSLOduration=2.390315969 podStartE2EDuration="17.1687071s" podCreationTimestamp="2026-01-30 14:41:50 +0000 UTC" firstStartedPulling="2026-01-30 14:41:52.014267612 +0000 UTC m=+3522.715616103" lastFinishedPulling="2026-01-30 14:42:06.792658743 +0000 UTC m=+3537.494007234" observedRunningTime="2026-01-30 14:42:07.163164804 +0000 UTC m=+3537.864513305" watchObservedRunningTime="2026-01-30 14:42:07.1687071 +0000 UTC m=+3537.870055591" Jan 30 14:42:10 crc kubenswrapper[4793]: I0130 14:42:10.587903 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:10 crc kubenswrapper[4793]: I0130 14:42:10.589115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:11 crc kubenswrapper[4793]: I0130 14:42:11.642473 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" probeResult="failure" output=< Jan 30 14:42:11 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:42:11 crc kubenswrapper[4793]: > Jan 30 14:42:21 crc kubenswrapper[4793]: I0130 14:42:21.636370 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" probeResult="failure" output=< Jan 30 14:42:21 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:42:21 crc kubenswrapper[4793]: > Jan 30 14:42:31 crc kubenswrapper[4793]: I0130 14:42:31.634947 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" probeResult="failure" output=< Jan 30 14:42:31 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:42:31 crc kubenswrapper[4793]: > Jan 30 14:42:40 crc kubenswrapper[4793]: I0130 14:42:40.633646 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:40 crc kubenswrapper[4793]: I0130 14:42:40.686470 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:40 crc kubenswrapper[4793]: I0130 14:42:40.873396 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:42:42 crc kubenswrapper[4793]: I0130 14:42:42.457422 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwkmg" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" containerID="cri-o://cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" gracePeriod=2 Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.133509 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.291645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") pod \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.291885 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") pod \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.291975 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") pod \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\" (UID: \"eaf6755c-f96b-44cd-a05b-10f4420c18b8\") " Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.292598 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities" (OuterVolumeSpecName: "utilities") pod "eaf6755c-f96b-44cd-a05b-10f4420c18b8" (UID: "eaf6755c-f96b-44cd-a05b-10f4420c18b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.298126 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.298300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx" (OuterVolumeSpecName: "kube-api-access-pltqx") pod "eaf6755c-f96b-44cd-a05b-10f4420c18b8" (UID: "eaf6755c-f96b-44cd-a05b-10f4420c18b8"). InnerVolumeSpecName "kube-api-access-pltqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.346715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaf6755c-f96b-44cd-a05b-10f4420c18b8" (UID: "eaf6755c-f96b-44cd-a05b-10f4420c18b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.399676 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pltqx\" (UniqueName: \"kubernetes.io/projected/eaf6755c-f96b-44cd-a05b-10f4420c18b8-kube-api-access-pltqx\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.399890 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf6755c-f96b-44cd-a05b-10f4420c18b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.468933 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" exitCode=0 Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.468973 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3"} Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.469005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwkmg" event={"ID":"eaf6755c-f96b-44cd-a05b-10f4420c18b8","Type":"ContainerDied","Data":"c153764a949f50d21d71def364eb8bcb1b9bbda31f3f770f7a6cbb2167fdd2b3"} Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.469027 4793 scope.go:117] "RemoveContainer" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.469211 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwkmg" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.507379 4793 scope.go:117] "RemoveContainer" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.536624 4793 scope.go:117] "RemoveContainer" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.537770 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.551669 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwkmg"] Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.581715 4793 scope.go:117] "RemoveContainer" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" Jan 30 14:42:43 crc kubenswrapper[4793]: E0130 14:42:43.582408 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3\": container with ID starting with cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3 not found: ID does not exist" containerID="cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582465 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3"} err="failed to get container status \"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3\": rpc error: code = NotFound desc = could not find container \"cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3\": container with ID starting with cb708c3205e58afa1b96accabbffc0524336a3bdac6c6154b9c635a42365e3b3 not found: ID does not exist" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582493 4793 scope.go:117] "RemoveContainer" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" Jan 30 14:42:43 crc kubenswrapper[4793]: E0130 14:42:43.582777 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251\": container with ID starting with aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251 not found: ID does not exist" containerID="aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582819 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251"} err="failed to get container status \"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251\": rpc error: code = NotFound desc = could not find container \"aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251\": container with ID starting with aecc0468872e5a8ee6c3ae8ff39e262ab749e967b12ba4eed34afac2650ff251 not found: ID does not exist" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.582846 4793 scope.go:117] "RemoveContainer" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" Jan 30 14:42:43 crc kubenswrapper[4793]: E0130 14:42:43.583230 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac\": container with ID starting with 02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac not found: ID does not exist" containerID="02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac" Jan 30 14:42:43 crc kubenswrapper[4793]: I0130 14:42:43.583254 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac"} err="failed to get container status \"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac\": rpc error: code = NotFound desc = could not find container \"02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac\": container with ID starting with 02bf158d1881f878076dfdd74dd2f57252e5313dfe910bceb675d24731c9ccac not found: ID does not exist" Jan 30 14:42:44 crc kubenswrapper[4793]: I0130 14:42:44.408841 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" path="/var/lib/kubelet/pods/eaf6755c-f96b-44cd-a05b-10f4420c18b8/volumes" Jan 30 14:43:42 crc kubenswrapper[4793]: I0130 14:43:42.414358 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:43:42 crc kubenswrapper[4793]: I0130 14:43:42.414956 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:44:12 crc kubenswrapper[4793]: I0130 14:44:12.413948 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:44:12 crc kubenswrapper[4793]: I0130 14:44:12.414589 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.413215 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.413747 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.418703 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.419523 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.419599 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" gracePeriod=600 Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.628738 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" exitCode=0 Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.628786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716"} Jan 30 14:44:42 crc kubenswrapper[4793]: I0130 14:44:42.628823 4793 scope.go:117] "RemoveContainer" containerID="3b40ff1ad28b890993e7464fb184af4aaf6269d300ea0eb233400b2a844450cc" Jan 30 14:44:42 crc kubenswrapper[4793]: E0130 14:44:42.898614 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:44:43 crc kubenswrapper[4793]: I0130 14:44:43.640019 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:44:43 crc kubenswrapper[4793]: E0130 14:44:43.640347 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:44:54 crc kubenswrapper[4793]: I0130 14:44:54.398822 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:44:54 crc kubenswrapper[4793]: E0130 14:44:54.399599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:00 crc 
kubenswrapper[4793]: I0130 14:45:00.318110 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 14:45:00 crc kubenswrapper[4793]: E0130 14:45:00.319110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319126 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[4793]: E0130 14:45:00.319136 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[4793]: E0130 14:45:00.319152 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319160 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319328 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf6755c-f96b-44cd-a05b-10f4420c18b8" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.319948 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.321685 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.323111 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.328843 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.446751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.446854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.447171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod 
\"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.548822 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.548902 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.549003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.550974 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.555759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.567267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"collect-profiles-29496405-ttc5r\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:00 crc kubenswrapper[4793]: I0130 14:45:00.644397 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.139133 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.796035 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerStarted","Data":"2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45"} Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.796456 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerStarted","Data":"8399c4ec038355d07dc866d370901380876d74943e2335ba1ab215513cac63aa"} Jan 30 14:45:01 crc kubenswrapper[4793]: I0130 14:45:01.816646 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" podStartSLOduration=1.816626997 podStartE2EDuration="1.816626997s" podCreationTimestamp="2026-01-30 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:45:01.816540035 +0000 UTC m=+3712.517888526" watchObservedRunningTime="2026-01-30 14:45:01.816626997 +0000 UTC m=+3712.517975488" Jan 30 14:45:02 crc kubenswrapper[4793]: I0130 14:45:02.807414 4793 generic.go:334] "Generic (PLEG): container finished" podID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerID="2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45" exitCode=0 Jan 30 14:45:02 crc kubenswrapper[4793]: I0130 14:45:02.807469 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerDied","Data":"2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45"} Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.289902 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.424640 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") pod \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.424801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") pod \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.424856 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") pod \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\" (UID: \"1c63ff2c-cb24-48c2-9af7-05d299d8b36a\") " Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.425534 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c63ff2c-cb24-48c2-9af7-05d299d8b36a" (UID: "1c63ff2c-cb24-48c2-9af7-05d299d8b36a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.432181 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7" (OuterVolumeSpecName: "kube-api-access-wkzl7") pod "1c63ff2c-cb24-48c2-9af7-05d299d8b36a" (UID: "1c63ff2c-cb24-48c2-9af7-05d299d8b36a"). InnerVolumeSpecName "kube-api-access-wkzl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.434376 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c63ff2c-cb24-48c2-9af7-05d299d8b36a" (UID: "1c63ff2c-cb24-48c2-9af7-05d299d8b36a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.526908 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.526950 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzl7\" (UniqueName: \"kubernetes.io/projected/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-kube-api-access-wkzl7\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.526966 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c63ff2c-cb24-48c2-9af7-05d299d8b36a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.825663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" event={"ID":"1c63ff2c-cb24-48c2-9af7-05d299d8b36a","Type":"ContainerDied","Data":"8399c4ec038355d07dc866d370901380876d74943e2335ba1ab215513cac63aa"} Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.825880 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8399c4ec038355d07dc866d370901380876d74943e2335ba1ab215513cac63aa" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.825771 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r" Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.925382 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:45:04 crc kubenswrapper[4793]: I0130 14:45:04.940429 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-gwpwk"] Jan 30 14:45:06 crc kubenswrapper[4793]: I0130 14:45:06.408334 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0262a970-62b2-47c1-93bf-1e4455a999bf" path="/var/lib/kubelet/pods/0262a970-62b2-47c1-93bf-1e4455a999bf/volumes" Jan 30 14:45:07 crc kubenswrapper[4793]: I0130 14:45:07.398863 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:07 crc kubenswrapper[4793]: E0130 14:45:07.399447 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:20 crc kubenswrapper[4793]: I0130 14:45:20.404269 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:20 crc kubenswrapper[4793]: E0130 14:45:20.404962 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:34 crc kubenswrapper[4793]: I0130 14:45:34.398815 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:34 crc kubenswrapper[4793]: E0130 14:45:34.399584 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:45 crc kubenswrapper[4793]: I0130 14:45:45.398947 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:45 crc kubenswrapper[4793]: E0130 14:45:45.399824 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:57 crc kubenswrapper[4793]: I0130 14:45:57.398719 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:45:57 crc kubenswrapper[4793]: E0130 14:45:57.399499 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:45:59 crc kubenswrapper[4793]: I0130 14:45:59.362111 4793 scope.go:117] "RemoveContainer" containerID="21efee8d4521693281692f27a68228834ba45b6ab82173ff835a52b2e30855b1" Jan 30 14:46:10 crc kubenswrapper[4793]: I0130 14:46:10.406131 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:10 crc kubenswrapper[4793]: E0130 14:46:10.407258 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:46:25 crc kubenswrapper[4793]: I0130 14:46:25.398116 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:25 crc kubenswrapper[4793]: E0130 14:46:25.398713 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:46:39 crc kubenswrapper[4793]: I0130 14:46:39.398395 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:39 crc kubenswrapper[4793]: E0130 14:46:39.399208 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:46:54 crc kubenswrapper[4793]: I0130 14:46:54.402037 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:46:54 crc kubenswrapper[4793]: E0130 14:46:54.404464 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:07 crc kubenswrapper[4793]: I0130 14:47:07.398085 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:07 crc kubenswrapper[4793]: E0130 14:47:07.398904 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:20 crc kubenswrapper[4793]: I0130 14:47:20.406628 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:20 crc kubenswrapper[4793]: E0130 14:47:20.407773 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:32 crc kubenswrapper[4793]: I0130 14:47:32.399040 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:32 crc kubenswrapper[4793]: E0130 14:47:32.399847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:47 crc kubenswrapper[4793]: I0130 14:47:47.398975 4793 
scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:47 crc kubenswrapper[4793]: E0130 14:47:47.400372 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:47:58 crc kubenswrapper[4793]: I0130 14:47:58.398388 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:47:58 crc kubenswrapper[4793]: E0130 14:47:58.399101 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:12 crc kubenswrapper[4793]: I0130 14:48:12.397980 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:12 crc kubenswrapper[4793]: E0130 14:48:12.398876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:26 crc kubenswrapper[4793]: I0130 14:48:26.398428 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:26 crc kubenswrapper[4793]: E0130 14:48:26.399501 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:41 crc kubenswrapper[4793]: I0130 14:48:41.398483 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:41 crc kubenswrapper[4793]: E0130 14:48:41.399118 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:48:53 crc kubenswrapper[4793]: I0130 14:48:53.399163 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:48:53 crc kubenswrapper[4793]: E0130 14:48:53.400029 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:07 crc kubenswrapper[4793]: I0130 14:49:07.398395 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:07 crc kubenswrapper[4793]: E0130 14:49:07.399606 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:19 crc kubenswrapper[4793]: I0130 14:49:19.642162 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:19 crc kubenswrapper[4793]: E0130 14:49:19.642824 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:33 crc kubenswrapper[4793]: I0130 14:49:33.398752 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:33 crc kubenswrapper[4793]: E0130 14:49:33.399944 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:49:47 crc kubenswrapper[4793]: I0130 14:49:47.397929 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:49:47 crc kubenswrapper[4793]: I0130 14:49:47.933623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2"} Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.448600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:49:55 crc kubenswrapper[4793]: E0130 14:49:55.458551 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerName="collect-profiles" Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.458638 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerName="collect-profiles" Jan 30 14:49:55 crc 
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.458908 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" containerName="collect-profiles"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.460399 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.464801 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"]
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.606361 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.606523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.606551 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708690 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.708898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.731940 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"redhat-operators-gf56s\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:55 crc kubenswrapper[4793]: I0130 14:49:55.818357 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s"
Jan 30 14:49:56 crc kubenswrapper[4793]: I0130 14:49:56.410651 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"]
Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.012721 4793 generic.go:334] "Generic (PLEG): container finished" podID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8" exitCode=0
Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.012893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8"}
Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.013018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerStarted","Data":"eb8aba70dedaa058f3a16e5f14146fe310d30f48bd736ec9df6877aa331a5240"}
Jan 30 14:49:57 crc kubenswrapper[4793]: I0130 14:49:57.014946 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 14:49:59 crc kubenswrapper[4793]: I0130 14:49:59.043852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerStarted","Data":"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640"}
Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.437024 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"]
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.465291 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.501083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.501347 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.501549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.603363 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.603499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.603545 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.604136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.604469 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.729258 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"redhat-marketplace-jlnlv\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:00 crc kubenswrapper[4793]: I0130 14:50:00.771084 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:01 crc kubenswrapper[4793]: I0130 14:50:01.459521 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:02 crc kubenswrapper[4793]: I0130 14:50:02.075102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c"} Jan 30 14:50:02 crc kubenswrapper[4793]: I0130 14:50:02.075640 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"6169685a639926301d571b04cc5d15f21a0a9d940ee376e0840462ee49a612de"} Jan 30 14:50:03 crc kubenswrapper[4793]: I0130 14:50:03.085061 4793 generic.go:334] "Generic (PLEG): container finished" podID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" exitCode=0 Jan 30 14:50:03 crc kubenswrapper[4793]: I0130 14:50:03.085134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c"} Jan 30 14:50:05 crc kubenswrapper[4793]: I0130 14:50:05.122890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2"} Jan 30 14:50:07 crc kubenswrapper[4793]: I0130 14:50:07.142839 4793 generic.go:334] "Generic (PLEG): container finished" podID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" exitCode=0 Jan 30 14:50:07 crc kubenswrapper[4793]: I0130 14:50:07.143378 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2"} Jan 30 14:50:08 crc kubenswrapper[4793]: I0130 14:50:08.828989 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out" Jan 30 14:50:08 crc kubenswrapper[4793]: I0130 14:50:08.829236 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f45b0069-4cb7-4dfd-ac2d-1473cacbde1f" containerName="galera" probeResult="failure" output="command timed out" Jan 30 14:50:12 crc kubenswrapper[4793]: I0130 14:50:12.194563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" 
event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerStarted","Data":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"} Jan 30 14:50:12 crc kubenswrapper[4793]: I0130 14:50:12.224843 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jlnlv" podStartSLOduration=4.122386512 podStartE2EDuration="12.224821567s" podCreationTimestamp="2026-01-30 14:50:00 +0000 UTC" firstStartedPulling="2026-01-30 14:50:03.087413177 +0000 UTC m=+4013.788761668" lastFinishedPulling="2026-01-30 14:50:11.189848232 +0000 UTC m=+4021.891196723" observedRunningTime="2026-01-30 14:50:12.220810599 +0000 UTC m=+4022.922159090" watchObservedRunningTime="2026-01-30 14:50:12.224821567 +0000 UTC m=+4022.926170058" Jan 30 14:50:14 crc kubenswrapper[4793]: I0130 14:50:14.217108 4793 generic.go:334] "Generic (PLEG): container finished" podID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" exitCode=0 Jan 30 14:50:14 crc kubenswrapper[4793]: I0130 14:50:14.217182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640"} Jan 30 14:50:16 crc kubenswrapper[4793]: I0130 14:50:16.240534 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerStarted","Data":"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5"} Jan 30 14:50:16 crc kubenswrapper[4793]: I0130 14:50:16.267104 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gf56s" podStartSLOduration=3.672785386 podStartE2EDuration="21.267084663s" podCreationTimestamp="2026-01-30 14:49:55 +0000 UTC" firstStartedPulling="2026-01-30 14:49:57.014665867 +0000 UTC m=+4007.716014358" lastFinishedPulling="2026-01-30 14:50:14.608965144 +0000 UTC m=+4025.310313635" observedRunningTime="2026-01-30 14:50:16.261021924 +0000 UTC m=+4026.962370425" watchObservedRunningTime="2026-01-30 14:50:16.267084663 +0000 UTC m=+4026.968433154" Jan 30 14:50:20 crc kubenswrapper[4793]: I0130 14:50:20.772248 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:20 crc kubenswrapper[4793]: I0130 14:50:20.772802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:21 crc kubenswrapper[4793]: I0130 14:50:21.838291 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jlnlv" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:21 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:21 crc kubenswrapper[4793]: > Jan 30 14:50:25 crc kubenswrapper[4793]: I0130 14:50:25.819549 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:25 crc kubenswrapper[4793]: I0130 14:50:25.820123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:26 crc kubenswrapper[4793]: I0130 14:50:26.881483 4793 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:26 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:26 crc kubenswrapper[4793]: > Jan 30 14:50:30 crc kubenswrapper[4793]: I0130 14:50:30.826701 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:30 crc kubenswrapper[4793]: I0130 14:50:30.877931 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:31 crc kubenswrapper[4793]: I0130 14:50:31.637030 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:32 crc kubenswrapper[4793]: I0130 14:50:32.361326 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jlnlv" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" containerID="cri-o://6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" gracePeriod=2 Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.245333 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.248425 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") pod \"4fa1a794-f8b8-400b-b829-57f761da53bf\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.248496 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") pod \"4fa1a794-f8b8-400b-b829-57f761da53bf\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.248598 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") pod \"4fa1a794-f8b8-400b-b829-57f761da53bf\" (UID: \"4fa1a794-f8b8-400b-b829-57f761da53bf\") " Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.249488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities" (OuterVolumeSpecName: "utilities") pod "4fa1a794-f8b8-400b-b829-57f761da53bf" (UID: "4fa1a794-f8b8-400b-b829-57f761da53bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.256824 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp" (OuterVolumeSpecName: "kube-api-access-mj4bp") pod "4fa1a794-f8b8-400b-b829-57f761da53bf" (UID: "4fa1a794-f8b8-400b-b829-57f761da53bf"). InnerVolumeSpecName "kube-api-access-mj4bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.291514 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa1a794-f8b8-400b-b829-57f761da53bf" (UID: "4fa1a794-f8b8-400b-b829-57f761da53bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.350699 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.350743 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj4bp\" (UniqueName: \"kubernetes.io/projected/4fa1a794-f8b8-400b-b829-57f761da53bf-kube-api-access-mj4bp\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.350781 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1a794-f8b8-400b-b829-57f761da53bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.383471 4793 generic.go:334] "Generic (PLEG): container finished" podID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" exitCode=0 Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.383528 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlnlv" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.383550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"} Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.384472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlnlv" event={"ID":"4fa1a794-f8b8-400b-b829-57f761da53bf","Type":"ContainerDied","Data":"6169685a639926301d571b04cc5d15f21a0a9d940ee376e0840462ee49a612de"} Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.384496 4793 scope.go:117] "RemoveContainer" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.410879 4793 scope.go:117] "RemoveContainer" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.434130 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.442956 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlnlv"] Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.495652 4793 scope.go:117] "RemoveContainer" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.637018 4793 scope.go:117] "RemoveContainer" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" Jan 30 14:50:33 crc kubenswrapper[4793]: E0130 14:50:33.637517 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80\": container with ID starting with 6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80 not found: ID does not exist" containerID="6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.637547 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80"} err="failed to get container status \"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80\": rpc error: code = NotFound desc = could not find container \"6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80\": container with ID starting with 6f67484780b7d6f285b4e56c4342d5f7e4e45dcf80f4bb349829e1b9923d6c80 not found: ID does not exist" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.637569 4793 scope.go:117] "RemoveContainer" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" Jan 30 14:50:33 crc kubenswrapper[4793]: E0130 14:50:33.637969 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2\": container with ID starting with 5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2 not found: ID does not exist" containerID="5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.638020 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2"} err="failed to get container status \"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2\": rpc error: code = NotFound desc = could not find container \"5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2\": container with ID starting with 5d0c7a6506c8d12c028efc75046bf4f6faa6d1eedc01feeb90fc1b2d915738f2 not found: ID does not exist" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.638072 4793 scope.go:117] "RemoveContainer" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" Jan 30 14:50:33 crc kubenswrapper[4793]: E0130 14:50:33.638539 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c\": container with ID starting with 32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c not found: ID does not exist" containerID="32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c" Jan 30 14:50:33 crc kubenswrapper[4793]: I0130 14:50:33.638568 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c"} err="failed to get container status \"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c\": rpc error: code = NotFound desc = could not find container \"32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c\": container with ID starting with 32c77e630bc3f75df71eb0903530c9b7649c4c8d88fb58b91092e9ba21fd992c not found: ID does not exist" Jan 30 14:50:34 crc kubenswrapper[4793]: I0130 14:50:34.409343 4793 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" path="/var/lib/kubelet/pods/4fa1a794-f8b8-400b-b829-57f761da53bf/volumes" Jan 30 14:50:36 crc kubenswrapper[4793]: I0130 14:50:36.873856 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:36 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:36 crc kubenswrapper[4793]: > Jan 30 14:50:46 crc kubenswrapper[4793]: I0130 14:50:46.876602 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" probeResult="failure" output=< Jan 30 14:50:46 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 14:50:46 crc kubenswrapper[4793]: > Jan 30 14:50:55 crc kubenswrapper[4793]: I0130 14:50:55.876665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:55 crc kubenswrapper[4793]: I0130 14:50:55.957854 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:56 crc kubenswrapper[4793]: I0130 14:50:56.685551 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:50:57 crc kubenswrapper[4793]: I0130 14:50:57.602604 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gf56s" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" containerID="cri-o://fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" gracePeriod=2 Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.330366 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.461509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") pod \"b58c525f-70f3-4640-a57c-9de37b17e01c\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.461629 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") pod \"b58c525f-70f3-4640-a57c-9de37b17e01c\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.461792 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") pod \"b58c525f-70f3-4640-a57c-9de37b17e01c\" (UID: \"b58c525f-70f3-4640-a57c-9de37b17e01c\") " Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.470605 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities" (OuterVolumeSpecName: "utilities") pod "b58c525f-70f3-4640-a57c-9de37b17e01c" (UID: "b58c525f-70f3-4640-a57c-9de37b17e01c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.471383 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.478581 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf" (OuterVolumeSpecName: "kube-api-access-lv2rf") pod "b58c525f-70f3-4640-a57c-9de37b17e01c" (UID: "b58c525f-70f3-4640-a57c-9de37b17e01c"). InnerVolumeSpecName "kube-api-access-lv2rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.573174 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2rf\" (UniqueName: \"kubernetes.io/projected/b58c525f-70f3-4640-a57c-9de37b17e01c-kube-api-access-lv2rf\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.597695 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b58c525f-70f3-4640-a57c-9de37b17e01c" (UID: "b58c525f-70f3-4640-a57c-9de37b17e01c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616301 4793 generic.go:334] "Generic (PLEG): container finished" podID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" exitCode=0 Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5"} Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf56s" event={"ID":"b58c525f-70f3-4640-a57c-9de37b17e01c","Type":"ContainerDied","Data":"eb8aba70dedaa058f3a16e5f14146fe310d30f48bd736ec9df6877aa331a5240"} Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616395 4793 scope.go:117] "RemoveContainer" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.616524 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gf56s" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.657901 4793 scope.go:117] "RemoveContainer" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.661136 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.672298 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gf56s"] Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.675741 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b58c525f-70f3-4640-a57c-9de37b17e01c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.686400 4793 scope.go:117] "RemoveContainer" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.719990 4793 scope.go:117] "RemoveContainer" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" Jan 30 14:50:58 crc kubenswrapper[4793]: E0130 14:50:58.721198 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5\": container with ID starting with fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5 not found: ID does not exist" containerID="fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721232 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5"} err="failed to get container status \"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5\": rpc error: code = NotFound desc = could not find container \"fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5\": container with ID starting with fec12531fa062ce63da8e66f6cab2e00bac602644f87be1624685fea5bc518f5 not found: ID does not exist" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721271 4793 scope.go:117] "RemoveContainer" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" Jan 30 14:50:58 crc kubenswrapper[4793]: E0130 14:50:58.721732 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640\": container with ID starting with 365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640 not found: ID does not exist" containerID="365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.721781 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640"} err="failed to get container status \"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640\": rpc error: code = NotFound desc = could not find container \"365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640\": container with ID starting with 365c91cddcefe944a967b97d82bf8aac95a8a8ed075036be7d14604a762f0640 not found: ID does not exist" Jan 30 14:50:58 crc 
kubenswrapper[4793]: I0130 14:50:58.721812 4793 scope.go:117] "RemoveContainer" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8" Jan 30 14:50:58 crc kubenswrapper[4793]: E0130 14:50:58.723370 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8\": container with ID starting with 42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8 not found: ID does not exist" containerID="42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8" Jan 30 14:50:58 crc kubenswrapper[4793]: I0130 14:50:58.723415 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8"} err="failed to get container status \"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8\": rpc error: code = NotFound desc = could not find container \"42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8\": container with ID starting with 42dff34c81ecee4a482c17d20ec075468af95973848fb31f04ca0a09a48f4dc8 not found: ID does not exist" Jan 30 14:51:00 crc kubenswrapper[4793]: I0130 14:51:00.409561 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" path="/var/lib/kubelet/pods/b58c525f-70f3-4640-a57c-9de37b17e01c/volumes" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.229475 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230411 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-content" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230428 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-content" Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230444 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-utilities" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230451 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-utilities" Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230472 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230479 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230497 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-content" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230504 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="extract-content" Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230516 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-utilities" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230524 4793 state_mem.go:107] "Deleted CPUSet assignment" 
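The cpu_manager/memory_manager "RemoveStaleState" entries drop per-container resource bookkeeping left behind by the two marketplace pods deleted earlier (4fa1a794... and b58c525f...). A toy sketch of the underlying idea, pruning a state map against the set of live pods (illustrative only, not the kubelet's actual state structures):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes entries whose pod no longer exists.
func removeStaleState(state map[key]string, livePods map[string]bool) {
	for k := range state {
		if !livePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"4fa1a794-f8b8-400b-b829-57f761da53bf", "registry-server"}: "cpuset 0-1",
		{"b58c525f-70f3-4640-a57c-9de37b17e01c", "extract-content"}: "cpuset 2-3",
	}
	removeStaleState(state, map[string]bool{}) // neither pod is live: both pruned
}
```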
podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="extract-utilities" Jan 30 14:52:04 crc kubenswrapper[4793]: E0130 14:52:04.230532 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230538 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230760 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa1a794-f8b8-400b-b829-57f761da53bf" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.230783 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58c525f-70f3-4640-a57c-9de37b17e01c" containerName="registry-server" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.232638 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.250875 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.370537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.370764 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.370837 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473339 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473371 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod 
\"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.473934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.494820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"certified-operators-r6cbd\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:04 crc kubenswrapper[4793]: I0130 14:52:04.555701 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:05 crc kubenswrapper[4793]: I0130 14:52:05.198104 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.194138 4793 generic.go:334] "Generic (PLEG): container finished" podID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" exitCode=0 Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.194190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce"} Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.195580 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerStarted","Data":"7d281de4bd80a47645e1191b1a907101005c1f6da7441fccffb894aceeed7a41"} Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.233961 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.240626 4793 util.go:30] "No sandbox for pod can be found. 
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.240626 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.286491 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc58f"]
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.315304 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.315456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.315631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418411 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418453 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.418973 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.419024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.442824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"community-operators-nc58f\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:06 crc kubenswrapper[4793]: I0130 14:52:06.570448 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc58f"
Jan 30 14:52:07 crc kubenswrapper[4793]: I0130 14:52:07.231758 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc58f"]
Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.219788 4793 generic.go:334] "Generic (PLEG): container finished" podID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerID="9797cc08d205a357f7341259f74f234a05068c9223b29e62a420e0ce3c9ec65f" exitCode=0
Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.219919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"9797cc08d205a357f7341259f74f234a05068c9223b29e62a420e0ce3c9ec65f"}
Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.220294 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerStarted","Data":"7b8fab036f2c800bfde40ab7395dabfb3875fce049341b6a53bcba807f11ac44"}
Jan 30 14:52:08 crc kubenswrapper[4793]: I0130 14:52:08.226192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerStarted","Data":"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb"}
Jan 30 14:52:10 crc kubenswrapper[4793]: I0130 14:52:10.260075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerStarted","Data":"4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5"}
Jan 30 14:52:10 crc kubenswrapper[4793]: I0130 14:52:10.262623 4793 generic.go:334] "Generic (PLEG): container finished" podID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" exitCode=0
Jan 30 14:52:10 crc kubenswrapper[4793]: I0130 14:52:10.262679 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb"}
Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.284849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerStarted","Data":"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435"}
Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.304705 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6cbd" podStartSLOduration=2.462898813 podStartE2EDuration="8.304686156s" podCreationTimestamp="2026-01-30 14:52:04 +0000 UTC" firstStartedPulling="2026-01-30 14:52:06.196885348 +0000 UTC m=+4136.898233829" lastFinishedPulling="2026-01-30 14:52:12.038672671 +0000 UTC m=+4142.740021172" observedRunningTime="2026-01-30 14:52:12.301841586 +0000 UTC m=+4143.003190097" watchObservedRunningTime="2026-01-30 14:52:12.304686156 +0000 UTC m=+4143.006034647"
Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.413780 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:52:12 crc kubenswrapper[4793]: I0130 14:52:12.413854 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:52:14 crc kubenswrapper[4793]: I0130 14:52:14.555883 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:14 crc kubenswrapper[4793]: I0130 14:52:14.556152 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:14 crc kubenswrapper[4793]: I0130 14:52:14.612391 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:17 crc kubenswrapper[4793]: I0130 14:52:17.337109 4793 generic.go:334] "Generic (PLEG): container finished" podID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerID="4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5" exitCode=0
Jan 30 14:52:17 crc kubenswrapper[4793]: I0130 14:52:17.337477 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5"}
Jan 30 14:52:18 crc kubenswrapper[4793]: I0130 14:52:18.364563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerStarted","Data":"8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8"}
Jan 30 14:52:18 crc kubenswrapper[4793]: I0130 14:52:18.410121 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nc58f" podStartSLOduration=2.856833559 podStartE2EDuration="12.410100164s" podCreationTimestamp="2026-01-30 14:52:06 +0000 UTC" firstStartedPulling="2026-01-30 14:52:08.225244568 +0000 UTC m=+4138.926593059" lastFinishedPulling="2026-01-30 14:52:17.778511173 +0000 UTC m=+4148.479859664" observedRunningTime="2026-01-30 14:52:18.395331642 +0000 UTC m=+4149.096680133" watchObservedRunningTime="2026-01-30 14:52:18.410100164 +0000 UTC m=+4149.111448655"
Jan 30 14:52:24 crc kubenswrapper[4793]: I0130 14:52:24.609274 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6cbd"
Jan 30 14:52:26 crc kubenswrapper[4793]: I0130 14:52:26.570665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nc58f"
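The machine-config-daemon liveness failure above is a plain HTTP GET against 127.0.0.1:8798/health answered with "connection refused", consistent with the container still being down after its earlier crash loop. A minimal sketch of such an HTTP health check with a short timeout (the 1s client timeout is an assumption for illustration):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With nothing listening, this yields the same
		// "connect: connection refused" the kubelet logged.
		fmt.Println("Liveness probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Liveness probe status:", resp.Status)
}
```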
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:26 crc kubenswrapper[4793]: I0130 14:52:26.631833 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:27 crc kubenswrapper[4793]: I0130 14:52:27.503920 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:27 crc kubenswrapper[4793]: I0130 14:52:27.586621 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:27 crc kubenswrapper[4793]: I0130 14:52:27.586889 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r6cbd" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" containerID="cri-o://7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" gracePeriod=2 Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.458984 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.460409 4793 generic.go:334] "Generic (PLEG): container finished" podID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" exitCode=0 Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.460464 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435"} Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.461715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6cbd" event={"ID":"e7b63510-a909-4a19-83a9-7aeeae35c681","Type":"ContainerDied","Data":"7d281de4bd80a47645e1191b1a907101005c1f6da7441fccffb894aceeed7a41"} Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.461737 4793 scope.go:117] "RemoveContainer" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.482586 4793 scope.go:117] "RemoveContainer" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.543173 4793 scope.go:117] "RemoveContainer" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.561770 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") pod \"e7b63510-a909-4a19-83a9-7aeeae35c681\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.561908 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") pod \"e7b63510-a909-4a19-83a9-7aeeae35c681\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.561944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") pod \"e7b63510-a909-4a19-83a9-7aeeae35c681\" (UID: \"e7b63510-a909-4a19-83a9-7aeeae35c681\") " Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.562862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities" (OuterVolumeSpecName: "utilities") pod "e7b63510-a909-4a19-83a9-7aeeae35c681" (UID: "e7b63510-a909-4a19-83a9-7aeeae35c681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.577895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj" (OuterVolumeSpecName: "kube-api-access-m2pnj") pod "e7b63510-a909-4a19-83a9-7aeeae35c681" (UID: "e7b63510-a909-4a19-83a9-7aeeae35c681"). InnerVolumeSpecName "kube-api-access-m2pnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.586955 4793 scope.go:117] "RemoveContainer" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" Jan 30 14:52:28 crc kubenswrapper[4793]: E0130 14:52:28.588288 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435\": container with ID starting with 7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435 not found: ID does not exist" containerID="7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.588341 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435"} err="failed to get container status \"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435\": rpc error: code = NotFound desc = could not find container \"7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435\": container with ID starting with 7c700b6712a76ae100b0f8194ed6cf34280572ccb922cf5f9634ef80677bb435 not found: ID does not exist" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.588361 4793 scope.go:117] "RemoveContainer" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" Jan 30 14:52:28 crc kubenswrapper[4793]: E0130 14:52:28.588723 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb\": container with ID starting with a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb not found: ID does not exist" containerID="a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.588756 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb"} err="failed to get container status \"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb\": rpc error: code = NotFound desc = could not find container \"a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb\": container with ID starting with a201083a54daffef8cd068e75bbe050dbe896bc5f71c65d2d09976c2fd05fdfb not found: ID does not exist" Jan 30 14:52:28 crc 
kubenswrapper[4793]: I0130 14:52:28.588781 4793 scope.go:117] "RemoveContainer" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" Jan 30 14:52:28 crc kubenswrapper[4793]: E0130 14:52:28.588992 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce\": container with ID starting with 7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce not found: ID does not exist" containerID="7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.589009 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce"} err="failed to get container status \"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce\": rpc error: code = NotFound desc = could not find container \"7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce\": container with ID starting with 7a5bbb74377130c9222b821e2020b290c31cc4c1e875c5d6725785602d5543ce not found: ID does not exist" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.618586 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7b63510-a909-4a19-83a9-7aeeae35c681" (UID: "e7b63510-a909-4a19-83a9-7aeeae35c681"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.664671 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.664725 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2pnj\" (UniqueName: \"kubernetes.io/projected/e7b63510-a909-4a19-83a9-7aeeae35c681-kube-api-access-m2pnj\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:28 crc kubenswrapper[4793]: I0130 14:52:28.664743 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b63510-a909-4a19-83a9-7aeeae35c681-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:29 crc kubenswrapper[4793]: I0130 14:52:29.471355 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6cbd" Jan 30 14:52:29 crc kubenswrapper[4793]: I0130 14:52:29.525074 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:29 crc kubenswrapper[4793]: I0130 14:52:29.535305 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r6cbd"] Jan 30 14:52:30 crc kubenswrapper[4793]: I0130 14:52:30.172158 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:30 crc kubenswrapper[4793]: I0130 14:52:30.410669 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" path="/var/lib/kubelet/pods/e7b63510-a909-4a19-83a9-7aeeae35c681/volumes" Jan 30 14:52:30 crc kubenswrapper[4793]: I0130 14:52:30.480196 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nc58f" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" containerID="cri-o://8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8" gracePeriod=2 Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.508929 4793 generic.go:334] "Generic (PLEG): container finished" podID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerID="8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8" exitCode=0 Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.509002 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8"} Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.509299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc58f" event={"ID":"53fa7ee2-40c6-42b2-83e7-91560b4ae614","Type":"ContainerDied","Data":"7b8fab036f2c800bfde40ab7395dabfb3875fce049341b6a53bcba807f11ac44"} Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.509319 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8fab036f2c800bfde40ab7395dabfb3875fce049341b6a53bcba807f11ac44" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.525187 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.656846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") pod \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.656963 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") pod \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.657007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") pod \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\" (UID: \"53fa7ee2-40c6-42b2-83e7-91560b4ae614\") " Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.660902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities" (OuterVolumeSpecName: "utilities") pod "53fa7ee2-40c6-42b2-83e7-91560b4ae614" (UID: "53fa7ee2-40c6-42b2-83e7-91560b4ae614"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.664242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn" (OuterVolumeSpecName: "kube-api-access-bx2bn") pod "53fa7ee2-40c6-42b2-83e7-91560b4ae614" (UID: "53fa7ee2-40c6-42b2-83e7-91560b4ae614"). InnerVolumeSpecName "kube-api-access-bx2bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.715094 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53fa7ee2-40c6-42b2-83e7-91560b4ae614" (UID: "53fa7ee2-40c6-42b2-83e7-91560b4ae614"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.760019 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.760071 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fa7ee2-40c6-42b2-83e7-91560b4ae614-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:31 crc kubenswrapper[4793]: I0130 14:52:31.760084 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx2bn\" (UniqueName: \"kubernetes.io/projected/53fa7ee2-40c6-42b2-83e7-91560b4ae614-kube-api-access-bx2bn\") on node \"crc\" DevicePath \"\"" Jan 30 14:52:32 crc kubenswrapper[4793]: I0130 14:52:32.517116 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc58f" Jan 30 14:52:32 crc kubenswrapper[4793]: I0130 14:52:32.540262 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:32 crc kubenswrapper[4793]: I0130 14:52:32.550021 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nc58f"] Jan 30 14:52:34 crc kubenswrapper[4793]: I0130 14:52:34.408914 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" path="/var/lib/kubelet/pods/53fa7ee2-40c6-42b2-83e7-91560b4ae614/volumes" Jan 30 14:52:42 crc kubenswrapper[4793]: I0130 14:52:42.414241 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:52:42 crc kubenswrapper[4793]: I0130 14:52:42.414768 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.413715 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.414362 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.414416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.415248 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.415351 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2" gracePeriod=600 Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870537 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2" exitCode=0 Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870815 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2"} Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870841 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"} Jan 30 14:53:12 crc kubenswrapper[4793]: I0130 14:53:12.870857 4793 scope.go:117] "RemoveContainer" containerID="c846e8000c855fed39fe7fbe759f1d0372085ff2ca230c4c07696315bd614716" Jan 30 14:55:12 crc kubenswrapper[4793]: I0130 14:55:12.413755 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:55:12 crc kubenswrapper[4793]: I0130 14:55:12.414367 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:55:42 crc kubenswrapper[4793]: I0130 14:55:42.413972 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:55:42 crc kubenswrapper[4793]: I0130 14:55:42.414552 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.414230 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.414786 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.414826 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.415628 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:56:12 crc kubenswrapper[4793]: I0130 14:56:12.415696 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" gracePeriod=600 Jan 30 14:56:12 crc kubenswrapper[4793]: E0130 14:56:12.623661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.499170 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" exitCode=0 Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.499238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552"} Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.500210 4793 scope.go:117] "RemoveContainer" containerID="cba2547b17c36e42af8677cd2bf7d48cb12f8208373936d3d3c20ac5c406aba2" Jan 30 14:56:13 crc kubenswrapper[4793]: I0130 14:56:13.501103 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:13 crc kubenswrapper[4793]: E0130 14:56:13.501382 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:25 crc kubenswrapper[4793]: I0130 14:56:25.398297 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:25 crc kubenswrapper[4793]: E0130 14:56:25.399087 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:38 crc kubenswrapper[4793]: I0130 14:56:38.400323 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:38 crc kubenswrapper[4793]: E0130 14:56:38.401193 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:56:51 crc kubenswrapper[4793]: I0130 14:56:51.398397 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:56:51 crc kubenswrapper[4793]: E0130 14:56:51.399318 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:02 crc kubenswrapper[4793]: I0130 14:57:02.398322 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:02 crc kubenswrapper[4793]: E0130 14:57:02.399037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:16 crc kubenswrapper[4793]: I0130 14:57:16.398142 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:16 crc kubenswrapper[4793]: E0130 14:57:16.398853 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:30 crc kubenswrapper[4793]: I0130 14:57:30.408065 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:30 crc kubenswrapper[4793]: E0130 14:57:30.408861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:43 crc kubenswrapper[4793]: I0130 14:57:43.398764 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:43 crc kubenswrapper[4793]: E0130 14:57:43.399776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:57:55 crc kubenswrapper[4793]: I0130 14:57:55.397905 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:57:55 crc kubenswrapper[4793]: E0130 14:57:55.398576 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:10 crc kubenswrapper[4793]: I0130 14:58:10.415937 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:10 crc kubenswrapper[4793]: E0130 14:58:10.416815 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:24 crc kubenswrapper[4793]: I0130 14:58:24.398449 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:24 crc kubenswrapper[4793]: E0130 14:58:24.399284 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:39 crc kubenswrapper[4793]: I0130 14:58:39.398813 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:39 crc kubenswrapper[4793]: E0130 14:58:39.400735 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:53 crc kubenswrapper[4793]: I0130 14:58:53.398908 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:58:53 crc kubenswrapper[4793]: E0130 14:58:53.399661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:58:59 crc kubenswrapper[4793]: I0130 14:58:59.808516 4793 scope.go:117] "RemoveContainer" 
containerID="4fac99a830596ab4c8ccd92b20b16f13dd985af78b405e7b37963e7f8429ddf5" Jan 30 14:58:59 crc kubenswrapper[4793]: I0130 14:58:59.858401 4793 scope.go:117] "RemoveContainer" containerID="9797cc08d205a357f7341259f74f234a05068c9223b29e62a420e0ce3c9ec65f" Jan 30 14:58:59 crc kubenswrapper[4793]: I0130 14:58:59.895659 4793 scope.go:117] "RemoveContainer" containerID="8483acdb27d6a9e9c65f4dd466fd68c3f03a2b90fd7995dcc8394d42f7515fb8" Jan 30 14:59:05 crc kubenswrapper[4793]: I0130 14:59:05.398110 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:05 crc kubenswrapper[4793]: E0130 14:59:05.398951 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:16 crc kubenswrapper[4793]: I0130 14:59:16.398523 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:16 crc kubenswrapper[4793]: E0130 14:59:16.399384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:29 crc kubenswrapper[4793]: I0130 14:59:29.398796 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:29 crc kubenswrapper[4793]: E0130 14:59:29.399605 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:40 crc kubenswrapper[4793]: I0130 14:59:40.404266 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:40 crc kubenswrapper[4793]: E0130 14:59:40.405111 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 14:59:53 crc kubenswrapper[4793]: I0130 14:59:53.399004 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 14:59:53 crc kubenswrapper[4793]: E0130 14:59:53.399723 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.187155 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc"] Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188091 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188107 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188124 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188132 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188167 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188200 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188224 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188232 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="extract-utilities" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188252 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188259 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="extract-content" Jan 30 15:00:00 crc kubenswrapper[4793]: E0130 15:00:00.188276 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188283 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188515 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fa7ee2-40c6-42b2-83e7-91560b4ae614" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.188530 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b63510-a909-4a19-83a9-7aeeae35c681" containerName="registry-server" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.189321 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.195887 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc"] Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.232329 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.271490 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.271580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.271603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.373207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.373537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.373615 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.374975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 
15:00:00.392414 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.395876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"collect-profiles-29496420-prvqc\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.412660 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:00:00 crc kubenswrapper[4793]: I0130 15:00:00.563871 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.044770 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc"] Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.604816 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerStarted","Data":"4237192bc7a1eb44289a5eeb0516108067794976041ba4876322f83681ec69f1"} Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.605036 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerStarted","Data":"5e63b95c3d5fe03a218b269dd621485abf1eeaa28d316c45d93b54d2a97ba10d"} Jan 30 15:00:01 crc kubenswrapper[4793]: I0130 15:00:01.622442 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" podStartSLOduration=1.622418827 podStartE2EDuration="1.622418827s" podCreationTimestamp="2026-01-30 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:00:01.618566162 +0000 UTC m=+4612.319914653" watchObservedRunningTime="2026-01-30 15:00:01.622418827 +0000 UTC m=+4612.323767318" Jan 30 15:00:02 crc kubenswrapper[4793]: I0130 15:00:02.613663 4793 generic.go:334] "Generic (PLEG): container finished" podID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerID="4237192bc7a1eb44289a5eeb0516108067794976041ba4876322f83681ec69f1" exitCode=0 Jan 30 15:00:02 crc kubenswrapper[4793]: I0130 15:00:02.613905 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerDied","Data":"4237192bc7a1eb44289a5eeb0516108067794976041ba4876322f83681ec69f1"} Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.053000 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.157836 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") pod \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.157941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") pod \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.158070 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") pod \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\" (UID: \"1eaa1894-c4b7-4c79-955c-7b713cbe1955\") " Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.158690 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume" (OuterVolumeSpecName: "config-volume") pod "1eaa1894-c4b7-4c79-955c-7b713cbe1955" (UID: "1eaa1894-c4b7-4c79-955c-7b713cbe1955"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.171260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1eaa1894-c4b7-4c79-955c-7b713cbe1955" (UID: "1eaa1894-c4b7-4c79-955c-7b713cbe1955"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.172306 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6" (OuterVolumeSpecName: "kube-api-access-898h6") pod "1eaa1894-c4b7-4c79-955c-7b713cbe1955" (UID: "1eaa1894-c4b7-4c79-955c-7b713cbe1955"). InnerVolumeSpecName "kube-api-access-898h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.260796 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-898h6\" (UniqueName: \"kubernetes.io/projected/1eaa1894-c4b7-4c79-955c-7b713cbe1955-kube-api-access-898h6\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.260828 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaa1894-c4b7-4c79-955c-7b713cbe1955-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.260840 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eaa1894-c4b7-4c79-955c-7b713cbe1955-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.641423 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" event={"ID":"1eaa1894-c4b7-4c79-955c-7b713cbe1955","Type":"ContainerDied","Data":"5e63b95c3d5fe03a218b269dd621485abf1eeaa28d316c45d93b54d2a97ba10d"} Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.641466 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e63b95c3d5fe03a218b269dd621485abf1eeaa28d316c45d93b54d2a97ba10d" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.641523 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496420-prvqc" Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.712731 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 15:00:04 crc kubenswrapper[4793]: I0130 15:00:04.720809 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-trbfn"] Jan 30 15:00:06 crc kubenswrapper[4793]: I0130 15:00:06.411558 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea958b8-aeb8-4696-b604-f1459d6d5608" path="/var/lib/kubelet/pods/dea958b8-aeb8-4696-b604-f1459d6d5608/volumes" Jan 30 15:00:08 crc kubenswrapper[4793]: I0130 15:00:08.399380 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:00:08 crc kubenswrapper[4793]: E0130 15:00:08.399938 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:23 crc kubenswrapper[4793]: I0130 15:00:23.398329 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:00:23 crc kubenswrapper[4793]: E0130 15:00:23.400259 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.936185 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"] Jan 30 15:00:32 crc kubenswrapper[4793]: E0130 15:00:32.937852 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerName="collect-profiles" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.937868 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerName="collect-profiles" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.938124 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eaa1894-c4b7-4c79-955c-7b713cbe1955" containerName="collect-profiles" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.939873 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.955862 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"] Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.967127 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.967210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:32 crc kubenswrapper[4793]: I0130 15:00:32.967373 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.068923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069159 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod 
\"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.069979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.088012 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"redhat-marketplace-xjtpf\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.271815 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.799222 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"] Jan 30 15:00:33 crc kubenswrapper[4793]: I0130 15:00:33.902015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerStarted","Data":"38b88db308377b3dbfec0ff500616be7f84f028d8a80cd35485f2bde95e3437f"} Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.407168 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:00:34 crc kubenswrapper[4793]: E0130 15:00:34.407758 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.915673 4793 generic.go:334] "Generic (PLEG): container finished" podID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215" exitCode=0 Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.916011 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"} Jan 30 15:00:34 crc kubenswrapper[4793]: I0130 15:00:34.920986 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:00:36 crc kubenswrapper[4793]: I0130 15:00:36.937998 4793 generic.go:334] "Generic (PLEG): container finished" 
podID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede" exitCode=0 Jan 30 15:00:36 crc kubenswrapper[4793]: I0130 15:00:36.938084 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"} Jan 30 15:00:38 crc kubenswrapper[4793]: I0130 15:00:38.959951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerStarted","Data":"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"} Jan 30 15:00:39 crc kubenswrapper[4793]: I0130 15:00:39.000660 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xjtpf" podStartSLOduration=4.030463643 podStartE2EDuration="7.000643803s" podCreationTimestamp="2026-01-30 15:00:32 +0000 UTC" firstStartedPulling="2026-01-30 15:00:34.920752804 +0000 UTC m=+4645.622101295" lastFinishedPulling="2026-01-30 15:00:37.890932964 +0000 UTC m=+4648.592281455" observedRunningTime="2026-01-30 15:00:38.997507736 +0000 UTC m=+4649.698856247" watchObservedRunningTime="2026-01-30 15:00:39.000643803 +0000 UTC m=+4649.701992294" Jan 30 15:00:43 crc kubenswrapper[4793]: I0130 15:00:43.272163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:43 crc kubenswrapper[4793]: I0130 15:00:43.273639 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:43 crc kubenswrapper[4793]: I0130 15:00:43.321689 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:44 crc kubenswrapper[4793]: I0130 15:00:44.464275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:44 crc kubenswrapper[4793]: I0130 15:00:44.519971 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"] Jan 30 15:00:46 crc kubenswrapper[4793]: I0130 15:00:46.022124 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xjtpf" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server" containerID="cri-o://07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" gracePeriod=2 Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.017694 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041561 4793 generic.go:334] "Generic (PLEG): container finished" podID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" exitCode=0 Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041604 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"} Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041632 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjtpf" event={"ID":"4020bc12-6cb5-4f85-9298-32e7874c7946","Type":"ContainerDied","Data":"38b88db308377b3dbfec0ff500616be7f84f028d8a80cd35485f2bde95e3437f"} Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041652 4793 scope.go:117] "RemoveContainer" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.041671 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjtpf" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.044231 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") pod \"4020bc12-6cb5-4f85-9298-32e7874c7946\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.074248 4793 scope.go:117] "RemoveContainer" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.094321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4020bc12-6cb5-4f85-9298-32e7874c7946" (UID: "4020bc12-6cb5-4f85-9298-32e7874c7946"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.110179 4793 scope.go:117] "RemoveContainer" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.147698 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") pod \"4020bc12-6cb5-4f85-9298-32e7874c7946\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.147775 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") pod \"4020bc12-6cb5-4f85-9298-32e7874c7946\" (UID: \"4020bc12-6cb5-4f85-9298-32e7874c7946\") " Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.148347 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.149291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities" (OuterVolumeSpecName: "utilities") pod "4020bc12-6cb5-4f85-9298-32e7874c7946" (UID: "4020bc12-6cb5-4f85-9298-32e7874c7946"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.154747 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd" (OuterVolumeSpecName: "kube-api-access-78vhd") pod "4020bc12-6cb5-4f85-9298-32e7874c7946" (UID: "4020bc12-6cb5-4f85-9298-32e7874c7946"). InnerVolumeSpecName "kube-api-access-78vhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.157034 4793 scope.go:117] "RemoveContainer" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.157675 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1\": container with ID starting with 07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1 not found: ID does not exist" containerID="07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.157731 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1"} err="failed to get container status \"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1\": rpc error: code = NotFound desc = could not find container \"07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1\": container with ID starting with 07fb2f2c5995bc8f7515d96c05fb7d791ca29d51f09b4f32d4b9e85c4acea2b1 not found: ID does not exist" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.157762 4793 scope.go:117] "RemoveContainer" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede" Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.158311 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede\": container with ID starting with 68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede not found: ID does not exist" containerID="68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.158413 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede"} err="failed to get container status \"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede\": rpc error: code = NotFound desc = could not find container \"68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede\": container with ID starting with 68dfe97b98d332ba42df4d540ad5153d423389753d9212e7a4fd87e79987dede not found: ID does not exist" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.158501 4793 scope.go:117] "RemoveContainer" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215" Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.158935 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215\": container with ID starting with ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215 not found: ID does not exist" containerID="ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.158971 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215"} err="failed to get container status \"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215\": rpc error: code = NotFound desc = could not 
find container \"ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215\": container with ID starting with ec0af277c399664eee86ad64869994cf2ffb1bd8e9977c88a04d364bdb7b6215 not found: ID does not exist" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.249996 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78vhd\" (UniqueName: \"kubernetes.io/projected/4020bc12-6cb5-4f85-9298-32e7874c7946-kube-api-access-78vhd\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.250315 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4020bc12-6cb5-4f85-9298-32e7874c7946-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.379896 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"] Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.388469 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjtpf"] Jan 30 15:00:47 crc kubenswrapper[4793]: I0130 15:00:47.398621 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:00:47 crc kubenswrapper[4793]: E0130 15:00:47.398955 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:00:48 crc kubenswrapper[4793]: I0130 15:00:48.409964 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" path="/var/lib/kubelet/pods/4020bc12-6cb5-4f85-9298-32e7874c7946/volumes" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.184661 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"] Jan 30 15:00:58 crc kubenswrapper[4793]: E0130 15:00:58.186848 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-content" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.186922 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-content" Jan 30 15:00:58 crc kubenswrapper[4793]: E0130 15:00:58.187019 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.187176 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="registry-server" Jan 30 15:00:58 crc kubenswrapper[4793]: E0130 15:00:58.187232 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-utilities" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.187278 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" containerName="extract-utilities" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.187588 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4020bc12-6cb5-4f85-9298-32e7874c7946" 
containerName="registry-server" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.189194 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.196468 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"] Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.291203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.291356 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.291467 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.393752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.393828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.393914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.394392 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.394469 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 
30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.416623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"redhat-operators-crpmz\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:58 crc kubenswrapper[4793]: I0130 15:00:58.546131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:00:59 crc kubenswrapper[4793]: I0130 15:00:59.059304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"] Jan 30 15:00:59 crc kubenswrapper[4793]: I0130 15:00:59.154193 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerStarted","Data":"fb93cb9ff568521eef67dcd73afac2fcad2954cae531e13237dc9dddfdefc166"} Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.047323 4793 scope.go:117] "RemoveContainer" containerID="169c63fb85351a767003e368e147b08afafad5a61c0c77bb947c35a8af5282ae" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.167346 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c" exitCode=0 Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.167459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"} Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.179346 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496421-n28p5"] Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.181435 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.223761 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496421-n28p5"] Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.226898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.226939 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.226982 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.227011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329339 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.329470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.335790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.335803 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.341310 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.352486 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"keystone-cron-29496421-n28p5\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:00 crc kubenswrapper[4793]: I0130 15:01:00.507940 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:01 crc kubenswrapper[4793]: I0130 15:01:01.096775 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496421-n28p5"] Jan 30 15:01:01 crc kubenswrapper[4793]: W0130 15:01:01.100395 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617a2857_c4b0_4558_9834_551a98cd534f.slice/crio-22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337 WatchSource:0}: Error finding container 22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337: Status 404 returned error can't find the container with id 22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337 Jan 30 15:01:01 crc kubenswrapper[4793]: I0130 15:01:01.194140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerStarted","Data":"22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337"} Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.206160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerStarted","Data":"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"} Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.208026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerStarted","Data":"1596cdc010d60aaf0a6cebd1da4a3bfed114acf0f745eba93f905ae48089cb08"} Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.258068 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496421-n28p5" podStartSLOduration=2.258032564 podStartE2EDuration="2.258032564s" podCreationTimestamp="2026-01-30 15:01:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:01:02.246897491 +0000 UTC m=+4672.948246002" watchObservedRunningTime="2026-01-30 15:01:02.258032564 +0000 UTC m=+4672.959381055" Jan 30 15:01:02 crc kubenswrapper[4793]: I0130 15:01:02.401076 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:01:02 crc kubenswrapper[4793]: E0130 15:01:02.401364 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:01:13 crc kubenswrapper[4793]: I0130 15:01:13.291653 4793 generic.go:334] "Generic (PLEG): container finished" podID="617a2857-c4b0-4558-9834-551a98cd534f" containerID="1596cdc010d60aaf0a6cebd1da4a3bfed114acf0f745eba93f905ae48089cb08" exitCode=0 Jan 30 15:01:13 crc kubenswrapper[4793]: I0130 15:01:13.291731 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerDied","Data":"1596cdc010d60aaf0a6cebd1da4a3bfed114acf0f745eba93f905ae48089cb08"} Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.398713 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.746609 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.827615 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.827737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.827880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.828072 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") pod \"617a2857-c4b0-4558-9834-551a98cd534f\" (UID: \"617a2857-c4b0-4558-9834-551a98cd534f\") " Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.844889 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2" (OuterVolumeSpecName: "kube-api-access-nqlj2") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "kube-api-access-nqlj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.851145 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.924257 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.932672 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.932711 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlj2\" (UniqueName: \"kubernetes.io/projected/617a2857-c4b0-4558-9834-551a98cd534f-kube-api-access-nqlj2\") on node \"crc\" DevicePath \"\"" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.932727 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 15:01:14 crc kubenswrapper[4793]: I0130 15:01:14.989201 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data" (OuterVolumeSpecName: "config-data") pod "617a2857-c4b0-4558-9834-551a98cd534f" (UID: "617a2857-c4b0-4558-9834-551a98cd534f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.034360 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/617a2857-c4b0-4558-9834-551a98cd534f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.310213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0"} Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.313303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496421-n28p5" event={"ID":"617a2857-c4b0-4558-9834-551a98cd534f","Type":"ContainerDied","Data":"22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337"} Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.313347 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e75b2682355a53c000cf2d7322b5edb68873a2609369389f9d1dd037464337" Jan 30 15:01:15 crc kubenswrapper[4793]: I0130 15:01:15.313408 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496421-n28p5" Jan 30 15:01:17 crc kubenswrapper[4793]: I0130 15:01:17.332569 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e" exitCode=0 Jan 30 15:01:17 crc kubenswrapper[4793]: I0130 15:01:17.332632 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"} Jan 30 15:01:19 crc kubenswrapper[4793]: I0130 15:01:19.351999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerStarted","Data":"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"} Jan 30 15:01:19 crc kubenswrapper[4793]: I0130 15:01:19.377976 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-crpmz" podStartSLOduration=3.516454064 podStartE2EDuration="21.377954867s" podCreationTimestamp="2026-01-30 15:00:58 +0000 UTC" firstStartedPulling="2026-01-30 15:01:00.210973105 +0000 UTC m=+4670.912321586" lastFinishedPulling="2026-01-30 15:01:18.072473898 +0000 UTC m=+4688.773822389" observedRunningTime="2026-01-30 15:01:19.375182329 +0000 UTC m=+4690.076530820" watchObservedRunningTime="2026-01-30 15:01:19.377954867 +0000 UTC m=+4690.079303358" Jan 30 15:01:28 crc kubenswrapper[4793]: I0130 15:01:28.546432 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:01:28 crc kubenswrapper[4793]: I0130 15:01:28.546949 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:01:29 crc kubenswrapper[4793]: I0130 15:01:29.603935 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" probeResult="failure" output=< Jan 30 15:01:29 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:01:29 crc kubenswrapper[4793]: > Jan 30 15:01:39 crc kubenswrapper[4793]: I0130 15:01:39.594748 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" probeResult="failure" output=< Jan 30 15:01:39 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:01:39 crc kubenswrapper[4793]: > Jan 30 15:01:49 crc kubenswrapper[4793]: I0130 15:01:49.599881 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" probeResult="failure" output=< Jan 30 15:01:49 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:01:49 crc kubenswrapper[4793]: > Jan 30 15:01:58 crc kubenswrapper[4793]: I0130 15:01:58.686013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:01:58 crc kubenswrapper[4793]: I0130 15:01:58.745199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:01:59 crc kubenswrapper[4793]: I0130 15:01:59.402854 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"] Jan 30 15:01:59 crc kubenswrapper[4793]: I0130 15:01:59.717643 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-crpmz" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" containerID="cri-o://1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" gracePeriod=2 Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.439801 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.580456 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") pod \"c7abe19e-d694-43f4-b261-cdf9b3e60681\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.580588 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") pod \"c7abe19e-d694-43f4-b261-cdf9b3e60681\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.580733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") pod \"c7abe19e-d694-43f4-b261-cdf9b3e60681\" (UID: \"c7abe19e-d694-43f4-b261-cdf9b3e60681\") " Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.581484 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities" (OuterVolumeSpecName: "utilities") pod "c7abe19e-d694-43f4-b261-cdf9b3e60681" (UID: "c7abe19e-d694-43f4-b261-cdf9b3e60681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.582520 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.598326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq" (OuterVolumeSpecName: "kube-api-access-9m7vq") pod "c7abe19e-d694-43f4-b261-cdf9b3e60681" (UID: "c7abe19e-d694-43f4-b261-cdf9b3e60681"). InnerVolumeSpecName "kube-api-access-9m7vq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.684783 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m7vq\" (UniqueName: \"kubernetes.io/projected/c7abe19e-d694-43f4-b261-cdf9b3e60681-kube-api-access-9m7vq\") on node \"crc\" DevicePath \"\"" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.710406 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7abe19e-d694-43f4-b261-cdf9b3e60681" (UID: "c7abe19e-d694-43f4-b261-cdf9b3e60681"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728264 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" exitCode=0 Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728309 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"} Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728332 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-crpmz" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728358 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-crpmz" event={"ID":"c7abe19e-d694-43f4-b261-cdf9b3e60681","Type":"ContainerDied","Data":"fb93cb9ff568521eef67dcd73afac2fcad2954cae531e13237dc9dddfdefc166"} Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.728379 4793 scope.go:117] "RemoveContainer" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.748591 4793 scope.go:117] "RemoveContainer" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.767738 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"] Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.775867 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-crpmz"] Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.778483 4793 scope.go:117] "RemoveContainer" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.786331 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abe19e-d694-43f4-b261-cdf9b3e60681-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.825174 4793 scope.go:117] "RemoveContainer" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" Jan 30 15:02:00 crc kubenswrapper[4793]: E0130 15:02:00.825618 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88\": container with ID starting with 1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88 
not found: ID does not exist" containerID="1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.825658 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88"} err="failed to get container status \"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88\": rpc error: code = NotFound desc = could not find container \"1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88\": container with ID starting with 1b7200aa33fcb5b09a6d68eb3810287788382822cf378c399d03a2b28f69bf88 not found: ID does not exist" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.825683 4793 scope.go:117] "RemoveContainer" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e" Jan 30 15:02:00 crc kubenswrapper[4793]: E0130 15:02:00.827611 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e\": container with ID starting with 98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e not found: ID does not exist" containerID="98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.827687 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e"} err="failed to get container status \"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e\": rpc error: code = NotFound desc = could not find container \"98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e\": container with ID starting with 98a0dea70129c6b84c69967e6f4266b941b290a870a31982b8a9c97507b9442e not found: ID does not exist" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.827705 4793 scope.go:117] "RemoveContainer" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c" Jan 30 15:02:00 crc kubenswrapper[4793]: E0130 15:02:00.828116 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c\": container with ID starting with eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c not found: ID does not exist" containerID="eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c" Jan 30 15:02:00 crc kubenswrapper[4793]: I0130 15:02:00.828149 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c"} err="failed to get container status \"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c\": rpc error: code = NotFound desc = could not find container \"eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c\": container with ID starting with eac9506b086d692f93763edc8221b080b04b230e45104280bd14e95a08eea43c not found: ID does not exist" Jan 30 15:02:02 crc kubenswrapper[4793]: I0130 15:02:02.410412 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" path="/var/lib/kubelet/pods/c7abe19e-d694-43f4-b261-cdf9b3e60681/volumes" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.381811 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382668 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-utilities" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382683 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-utilities" Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382710 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617a2857-c4b0-4558-9834-551a98cd534f" containerName="keystone-cron" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382717 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="617a2857-c4b0-4558-9834-551a98cd534f" containerName="keystone-cron" Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382727 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382733 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" Jan 30 15:02:08 crc kubenswrapper[4793]: E0130 15:02:08.382751 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-content" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382757 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="extract-content" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382926 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="617a2857-c4b0-4558-9834-551a98cd534f" containerName="keystone-cron" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.382949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7abe19e-d694-43f4-b261-cdf9b3e60681" containerName="registry-server" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.384500 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.426989 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.437018 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.437091 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.437188 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.538656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.539026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.539188 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.539978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.540581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:08 crc kubenswrapper[4793]: I0130 15:02:08.829601 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"certified-operators-cdw7k\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.024343 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.533551 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:09 crc kubenswrapper[4793]: W0130 15:02:09.548949 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a5ddb3_fbef_4413_a123_81ea7ce9adf7.slice/crio-f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341 WatchSource:0}: Error finding container f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341: Status 404 returned error can't find the container with id f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341 Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.806018 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" exitCode=0 Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.806294 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71"} Jan 30 15:02:09 crc kubenswrapper[4793]: I0130 15:02:09.806320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerStarted","Data":"f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341"} Jan 30 15:02:12 crc kubenswrapper[4793]: I0130 15:02:12.847521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerStarted","Data":"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736"} Jan 30 15:02:14 crc kubenswrapper[4793]: I0130 15:02:14.871496 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" exitCode=0 Jan 30 15:02:14 crc kubenswrapper[4793]: I0130 15:02:14.872067 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736"} Jan 30 15:02:16 crc kubenswrapper[4793]: I0130 15:02:16.892467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerStarted","Data":"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3"} Jan 30 15:02:16 crc kubenswrapper[4793]: I0130 15:02:16.917516 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdw7k" 
podStartSLOduration=3.939188581 podStartE2EDuration="8.917496046s" podCreationTimestamp="2026-01-30 15:02:08 +0000 UTC" firstStartedPulling="2026-01-30 15:02:10.818266458 +0000 UTC m=+4741.519614949" lastFinishedPulling="2026-01-30 15:02:15.796573923 +0000 UTC m=+4746.497922414" observedRunningTime="2026-01-30 15:02:16.912732159 +0000 UTC m=+4747.614080660" watchObservedRunningTime="2026-01-30 15:02:16.917496046 +0000 UTC m=+4747.618844537" Jan 30 15:02:19 crc kubenswrapper[4793]: I0130 15:02:19.024574 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:19 crc kubenswrapper[4793]: I0130 15:02:19.024881 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:19 crc kubenswrapper[4793]: I0130 15:02:19.074067 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:29 crc kubenswrapper[4793]: I0130 15:02:29.075806 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:29 crc kubenswrapper[4793]: I0130 15:02:29.121611 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.012733 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdw7k" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" containerID="cri-o://6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" gracePeriod=2 Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.600399 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.707639 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") pod \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.707797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") pod \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.707842 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") pod \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\" (UID: \"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7\") " Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.708765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities" (OuterVolumeSpecName: "utilities") pod "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" (UID: "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.713592 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j" (OuterVolumeSpecName: "kube-api-access-hng6j") pod "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" (UID: "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7"). InnerVolumeSpecName "kube-api-access-hng6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.757231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" (UID: "c5a5ddb3-fbef-4413-a123-81ea7ce9adf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.810473 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.810511 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:02:30 crc kubenswrapper[4793]: I0130 15:02:30.810522 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hng6j\" (UniqueName: \"kubernetes.io/projected/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7-kube-api-access-hng6j\") on node \"crc\" DevicePath \"\"" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.026635 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" exitCode=0 Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.026677 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3"} Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.026699 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdw7k" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.027192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdw7k" event={"ID":"c5a5ddb3-fbef-4413-a123-81ea7ce9adf7","Type":"ContainerDied","Data":"f0fef69f2593a16b3c150f12e6d51f43a217cede57521b9a80d8fb238f2b3341"} Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.027207 4793 scope.go:117] "RemoveContainer" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.058099 4793 scope.go:117] "RemoveContainer" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.075173 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.085592 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdw7k"] Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.451467 4793 scope.go:117] "RemoveContainer" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.516033 4793 scope.go:117] "RemoveContainer" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" Jan 30 15:02:31 crc kubenswrapper[4793]: E0130 15:02:31.516529 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3\": container with ID starting with 6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3 not found: ID does not exist" containerID="6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.516569 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3"} err="failed to get container status \"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3\": rpc error: code = NotFound desc = could not find container \"6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3\": container with ID starting with 6baafb77ba67cc64206ff939c5e5caed8647aba3bd013590b197e8e7f4c40cb3 not found: ID does not exist" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.516591 4793 scope.go:117] "RemoveContainer" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" Jan 30 15:02:31 crc kubenswrapper[4793]: E0130 15:02:31.517002 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736\": container with ID starting with 17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736 not found: ID does not exist" containerID="17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.517142 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736"} err="failed to get container status \"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736\": rpc error: code = NotFound desc = could not find 
container \"17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736\": container with ID starting with 17ba1f213607d5464f265566c1ffd26e0a1e4c5aa051d2c1d85734e8993be736 not found: ID does not exist" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.517241 4793 scope.go:117] "RemoveContainer" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" Jan 30 15:02:31 crc kubenswrapper[4793]: E0130 15:02:31.517617 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71\": container with ID starting with 9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71 not found: ID does not exist" containerID="9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71" Jan 30 15:02:31 crc kubenswrapper[4793]: I0130 15:02:31.517642 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71"} err="failed to get container status \"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71\": rpc error: code = NotFound desc = could not find container \"9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71\": container with ID starting with 9121156ff57c9cbdb42e1515a055ed9bab0a4794ebfb33d1582324a6778baf71 not found: ID does not exist" Jan 30 15:02:32 crc kubenswrapper[4793]: I0130 15:02:32.428125 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" path="/var/lib/kubelet/pods/c5a5ddb3-fbef-4413-a123-81ea7ce9adf7/volumes" Jan 30 15:03:42 crc kubenswrapper[4793]: I0130 15:03:42.414081 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:03:42 crc kubenswrapper[4793]: I0130 15:03:42.414718 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:04:12 crc kubenswrapper[4793]: I0130 15:04:12.413701 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:04:12 crc kubenswrapper[4793]: I0130 15:04:12.414464 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.413343 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 
15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.413911 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.413961 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.414793 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:04:42 crc kubenswrapper[4793]: I0130 15:04:42.414853 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0" gracePeriod=600 Jan 30 15:04:43 crc kubenswrapper[4793]: I0130 15:04:43.241544 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0" exitCode=0 Jan 30 15:04:43 crc kubenswrapper[4793]: I0130 15:04:43.241634 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0"} Jan 30 15:04:43 crc kubenswrapper[4793]: I0130 15:04:43.241944 4793 scope.go:117] "RemoveContainer" containerID="01ef481ba079151507351bdcc19c56fd2070b7aabf12655645ab4040de047552" Jan 30 15:04:44 crc kubenswrapper[4793]: I0130 15:04:44.257333 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71"} Jan 30 15:07:12 crc kubenswrapper[4793]: I0130 15:07:12.413206 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:07:12 crc kubenswrapper[4793]: I0130 15:07:12.413788 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836006 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:14 crc kubenswrapper[4793]: E0130 15:07:14.836559 4793 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-content" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836577 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-content" Jan 30 15:07:14 crc kubenswrapper[4793]: E0130 15:07:14.836601 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-utilities" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836609 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="extract-utilities" Jan 30 15:07:14 crc kubenswrapper[4793]: E0130 15:07:14.836642 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836654 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.836898 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a5ddb3-fbef-4413-a123-81ea7ce9adf7" containerName="registry-server" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.838764 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.860028 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.926211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.926570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:14 crc kubenswrapper[4793]: I0130 15:07:14.926687 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.029753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.030395 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.051799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"community-operators-wrdg9\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.157800 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.811888 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:15 crc kubenswrapper[4793]: I0130 15:07:15.999066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerStarted","Data":"1a35d3069eab5b35988d959a7b47b8631a96e9d363d3e40d680c3b80be285bba"} Jan 30 15:07:17 crc kubenswrapper[4793]: I0130 15:07:17.009707 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerID="2e68fc094c6474084a00ace7a1343c3281487ac0b42f6c0f86c4ce491d8395ce" exitCode=0 Jan 30 15:07:17 crc kubenswrapper[4793]: I0130 15:07:17.009756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"2e68fc094c6474084a00ace7a1343c3281487ac0b42f6c0f86c4ce491d8395ce"} Jan 30 15:07:17 crc kubenswrapper[4793]: I0130 15:07:17.011972 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:07:20 crc kubenswrapper[4793]: I0130 15:07:20.043951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerStarted","Data":"4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b"} Jan 30 15:07:23 crc kubenswrapper[4793]: I0130 15:07:23.076764 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerID="4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b" exitCode=0 Jan 30 15:07:23 crc kubenswrapper[4793]: I0130 15:07:23.076834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b"} Jan 30 15:07:28 crc kubenswrapper[4793]: I0130 15:07:28.126840 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerStarted","Data":"d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33"} Jan 30 15:07:28 crc kubenswrapper[4793]: I0130 15:07:28.155146 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wrdg9" podStartSLOduration=3.954083299 podStartE2EDuration="14.155126487s" podCreationTimestamp="2026-01-30 15:07:14 +0000 UTC" firstStartedPulling="2026-01-30 15:07:17.01166545 +0000 UTC m=+5047.713013941" lastFinishedPulling="2026-01-30 15:07:27.212708638 +0000 UTC m=+5057.914057129" observedRunningTime="2026-01-30 15:07:28.149458887 +0000 UTC m=+5058.850807398" watchObservedRunningTime="2026-01-30 15:07:28.155126487 +0000 UTC m=+5058.856474998" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.159888 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.160406 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.212452 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.302930 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:35 crc kubenswrapper[4793]: I0130 15:07:35.447064 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:37 crc kubenswrapper[4793]: I0130 15:07:37.271592 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wrdg9" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" containerID="cri-o://d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33" gracePeriod=2 Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.286454 4793 generic.go:334] "Generic (PLEG): container finished" podID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerID="d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33" exitCode=0 Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.286647 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33"} Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.288363 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrdg9" event={"ID":"eaa7e68a-f5c8-4492-b539-96fff099748d","Type":"ContainerDied","Data":"1a35d3069eab5b35988d959a7b47b8631a96e9d363d3e40d680c3b80be285bba"} Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.288398 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a35d3069eab5b35988d959a7b47b8631a96e9d363d3e40d680c3b80be285bba" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.344889 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.442583 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") pod \"eaa7e68a-f5c8-4492-b539-96fff099748d\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.442636 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") pod \"eaa7e68a-f5c8-4492-b539-96fff099748d\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.442677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") pod \"eaa7e68a-f5c8-4492-b539-96fff099748d\" (UID: \"eaa7e68a-f5c8-4492-b539-96fff099748d\") " Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.443637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities" (OuterVolumeSpecName: "utilities") pod "eaa7e68a-f5c8-4492-b539-96fff099748d" (UID: "eaa7e68a-f5c8-4492-b539-96fff099748d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.454436 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n" (OuterVolumeSpecName: "kube-api-access-l4z6n") pod "eaa7e68a-f5c8-4492-b539-96fff099748d" (UID: "eaa7e68a-f5c8-4492-b539-96fff099748d"). InnerVolumeSpecName "kube-api-access-l4z6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.499413 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaa7e68a-f5c8-4492-b539-96fff099748d" (UID: "eaa7e68a-f5c8-4492-b539-96fff099748d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.544860 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.544912 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4z6n\" (UniqueName: \"kubernetes.io/projected/eaa7e68a-f5c8-4492-b539-96fff099748d-kube-api-access-l4z6n\") on node \"crc\" DevicePath \"\"" Jan 30 15:07:38 crc kubenswrapper[4793]: I0130 15:07:38.544927 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa7e68a-f5c8-4492-b539-96fff099748d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:07:39 crc kubenswrapper[4793]: I0130 15:07:39.295164 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrdg9" Jan 30 15:07:39 crc kubenswrapper[4793]: I0130 15:07:39.327588 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:39 crc kubenswrapper[4793]: I0130 15:07:39.335661 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wrdg9"] Jan 30 15:07:40 crc kubenswrapper[4793]: I0130 15:07:40.410443 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" path="/var/lib/kubelet/pods/eaa7e68a-f5c8-4492-b539-96fff099748d/volumes" Jan 30 15:07:42 crc kubenswrapper[4793]: I0130 15:07:42.413994 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:07:42 crc kubenswrapper[4793]: I0130 15:07:42.414321 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.414131 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.414667 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.414709 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.415465 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.415513 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" gracePeriod=600 Jan 30 15:08:12 crc kubenswrapper[4793]: E0130 15:08:12.538822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.608320 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" exitCode=0 Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.608415 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71"} Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.608709 4793 scope.go:117] "RemoveContainer" containerID="c38987640cf280e4c02e580e84a0e7564fa5243ab30c792c5125d7350150b8b0" Jan 30 15:08:12 crc kubenswrapper[4793]: I0130 15:08:12.609662 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:12 crc kubenswrapper[4793]: E0130 15:08:12.610141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:27 crc kubenswrapper[4793]: I0130 15:08:27.398748 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:27 crc kubenswrapper[4793]: E0130 15:08:27.399456 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:42 crc kubenswrapper[4793]: I0130 15:08:42.399093 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:42 crc kubenswrapper[4793]: E0130 15:08:42.399844 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:08:53 crc kubenswrapper[4793]: I0130 15:08:53.397921 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:08:53 crc kubenswrapper[4793]: E0130 15:08:53.399765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:03 crc kubenswrapper[4793]: I0130 15:09:03.106670 4793 generic.go:334] "Generic (PLEG): container finished" podID="4bf53e2d-d024-4526-ada2-0ee6b461babb" containerID="d89fe0491771c7c6f955e91e1925c9e0d02dd442783163c9438dbd9b02ce47d9" exitCode=0 Jan 30 15:09:03 crc kubenswrapper[4793]: I0130 15:09:03.106791 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerDied","Data":"d89fe0491771c7c6f955e91e1925c9e0d02dd442783163c9438dbd9b02ce47d9"} Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.465466 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.533609 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.534772 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.534945 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535066 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535157 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535268 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535346 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc 
kubenswrapper[4793]: I0130 15:09:04.535410 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.535550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4bf53e2d-d024-4526-ada2-0ee6b461babb\" (UID: \"4bf53e2d-d024-4526-ada2-0ee6b461babb\") " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.540111 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.540779 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data" (OuterVolumeSpecName: "config-data") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.541571 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.542720 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.547228 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt" (OuterVolumeSpecName: "kube-api-access-579bt") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "kube-api-access-579bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.592665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.601758 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.622067 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.626969 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4bf53e2d-d024-4526-ada2-0ee6b461babb" (UID: "4bf53e2d-d024-4526-ada2-0ee6b461babb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637158 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-579bt\" (UniqueName: \"kubernetes.io/projected/4bf53e2d-d024-4526-ada2-0ee6b461babb-kube-api-access-579bt\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637277 4793 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637340 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637397 4793 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637450 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.637499 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bf53e2d-d024-4526-ada2-0ee6b461babb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.639695 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.639777 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bf53e2d-d024-4526-ada2-0ee6b461babb-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 
crc kubenswrapper[4793]: I0130 15:09:04.639831 4793 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4bf53e2d-d024-4526-ada2-0ee6b461babb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.668285 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 15:09:04 crc kubenswrapper[4793]: I0130 15:09:04.741584 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 15:09:05 crc kubenswrapper[4793]: I0130 15:09:05.124322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4bf53e2d-d024-4526-ada2-0ee6b461babb","Type":"ContainerDied","Data":"55c6a2b8062403d0e3d82dc5615fa6326ff29a1fce4fe5257e5d197c6f2071cb"} Jan 30 15:09:05 crc kubenswrapper[4793]: I0130 15:09:05.124407 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 15:09:05 crc kubenswrapper[4793]: I0130 15:09:05.124413 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c6a2b8062403d0e3d82dc5615fa6326ff29a1fce4fe5257e5d197c6f2071cb" Jan 30 15:09:06 crc kubenswrapper[4793]: I0130 15:09:06.398298 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:06 crc kubenswrapper[4793]: E0130 15:09:06.398836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378280 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378914 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-content" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378925 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-content" Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378945 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378951 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378972 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" containerName="tempest-tests-tempest-tests-runner" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378978 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" 
containerName="tempest-tests-tempest-tests-runner" Jan 30 15:09:08 crc kubenswrapper[4793]: E0130 15:09:08.378989 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-utilities" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.378995 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="extract-utilities" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.379175 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf53e2d-d024-4526-ada2-0ee6b461babb" containerName="tempest-tests-tempest-tests-runner" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.379206 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa7e68a-f5c8-4492-b539-96fff099748d" containerName="registry-server" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.379754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.383037 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9sb9w" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.394219 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.518337 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.518429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9vjt\" (UniqueName: \"kubernetes.io/projected/8de9d25e-7ca7-4338-a64e-ed95f7bd9de9-kube-api-access-q9vjt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.620353 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9vjt\" (UniqueName: \"kubernetes.io/projected/8de9d25e-7ca7-4338-a64e-ed95f7bd9de9-kube-api-access-q9vjt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.620559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.621619 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") device 
mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.728831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9vjt\" (UniqueName: \"kubernetes.io/projected/8de9d25e-7ca7-4338-a64e-ed95f7bd9de9-kube-api-access-q9vjt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:08 crc kubenswrapper[4793]: I0130 15:09:08.754725 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:09 crc kubenswrapper[4793]: I0130 15:09:09.003125 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 15:09:09 crc kubenswrapper[4793]: I0130 15:09:09.458927 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 15:09:10 crc kubenswrapper[4793]: I0130 15:09:10.167763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9","Type":"ContainerStarted","Data":"53a2b61bee4c7b8809c505a69704f25fbea86304433e8ac7ac5e69b5e4937279"} Jan 30 15:09:11 crc kubenswrapper[4793]: I0130 15:09:11.178006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8de9d25e-7ca7-4338-a64e-ed95f7bd9de9","Type":"ContainerStarted","Data":"c96fca4660b587eb60d3db2372a00d54e6b15e06f8daa20132280faca28efaed"} Jan 30 15:09:11 crc kubenswrapper[4793]: I0130 15:09:11.195310 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.9574385859999999 podStartE2EDuration="3.195292473s" podCreationTimestamp="2026-01-30 15:09:08 +0000 UTC" firstStartedPulling="2026-01-30 15:09:09.478961989 +0000 UTC m=+5160.180310480" lastFinishedPulling="2026-01-30 15:09:10.716815876 +0000 UTC m=+5161.418164367" observedRunningTime="2026-01-30 15:09:11.194774051 +0000 UTC m=+5161.896122562" watchObservedRunningTime="2026-01-30 15:09:11.195292473 +0000 UTC m=+5161.896640984" Jan 30 15:09:18 crc kubenswrapper[4793]: I0130 15:09:18.398626 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:18 crc kubenswrapper[4793]: E0130 15:09:18.399307 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:33 crc kubenswrapper[4793]: I0130 15:09:33.398971 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:33 crc 
kubenswrapper[4793]: E0130 15:09:33.401034 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.205444 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.214154 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.216936 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jg6df"/"kube-root-ca.crt" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.217185 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jg6df"/"default-dockercfg-lqjtp" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.217259 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jg6df"/"openshift-service-ca.crt" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.353302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.353638 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.392390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.455253 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.455348 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.455922 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " 
pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.485780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"must-gather-x5n45\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:34 crc kubenswrapper[4793]: I0130 15:09:34.544857 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:09:35 crc kubenswrapper[4793]: I0130 15:09:35.062144 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:09:35 crc kubenswrapper[4793]: W0130 15:09:35.064320 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cdbb05e_d475_48b2_9b59_297532883826.slice/crio-92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe WatchSource:0}: Error finding container 92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe: Status 404 returned error can't find the container with id 92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe Jan 30 15:09:35 crc kubenswrapper[4793]: I0130 15:09:35.399163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerStarted","Data":"92990bc991275785f1929ebbaa37c8f3adafb18828a0999f23e0277513cd18fe"} Jan 30 15:09:45 crc kubenswrapper[4793]: I0130 15:09:45.398900 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:45 crc kubenswrapper[4793]: E0130 15:09:45.404317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:09:49 crc kubenswrapper[4793]: I0130 15:09:49.546094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerStarted","Data":"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf"} Jan 30 15:09:52 crc kubenswrapper[4793]: I0130 15:09:52.575849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerStarted","Data":"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b"} Jan 30 15:09:52 crc kubenswrapper[4793]: I0130 15:09:52.599735 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jg6df/must-gather-x5n45" podStartSLOduration=4.531392122 podStartE2EDuration="18.599718094s" podCreationTimestamp="2026-01-30 15:09:34 +0000 UTC" firstStartedPulling="2026-01-30 15:09:35.066237271 +0000 UTC m=+5185.767585762" lastFinishedPulling="2026-01-30 15:09:49.134563243 +0000 UTC m=+5199.835911734" observedRunningTime="2026-01-30 15:09:52.590185759 +0000 UTC m=+5203.291534260" 
watchObservedRunningTime="2026-01-30 15:09:52.599718094 +0000 UTC m=+5203.301066585" Jan 30 15:09:56 crc kubenswrapper[4793]: E0130 15:09:56.606858 4793 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.2:51862->38.102.83.2:36591: write tcp 38.102.83.2:51862->38.102.83.2:36591: write: broken pipe Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.658414 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/crc-debug-2g87h"] Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.659970 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.734512 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.734758 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.836763 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.836867 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.836962 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.860831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"crc-debug-2g87h\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:57 crc kubenswrapper[4793]: I0130 15:09:57.978769 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:09:58 crc kubenswrapper[4793]: I0130 15:09:58.639268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-2g87h" event={"ID":"e91a73e1-11d2-483f-b279-af21dd483350","Type":"ContainerStarted","Data":"4781b9ae2f920b71223792d335627b59adabaf76e90902cdd7e6c060633fa2cf"} Jan 30 15:09:59 crc kubenswrapper[4793]: I0130 15:09:59.398067 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:09:59 crc kubenswrapper[4793]: E0130 15:09:59.398579 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:10 crc kubenswrapper[4793]: I0130 15:10:10.410267 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:10 crc kubenswrapper[4793]: E0130 15:10:10.410912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:11 crc kubenswrapper[4793]: I0130 15:10:11.786514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-2g87h" event={"ID":"e91a73e1-11d2-483f-b279-af21dd483350","Type":"ContainerStarted","Data":"cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f"} Jan 30 15:10:11 crc kubenswrapper[4793]: I0130 15:10:11.812010 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jg6df/crc-debug-2g87h" podStartSLOduration=1.555893357 podStartE2EDuration="14.811988976s" podCreationTimestamp="2026-01-30 15:09:57 +0000 UTC" firstStartedPulling="2026-01-30 15:09:58.039084824 +0000 UTC m=+5208.740433315" lastFinishedPulling="2026-01-30 15:10:11.295180443 +0000 UTC m=+5221.996528934" observedRunningTime="2026-01-30 15:10:11.800226515 +0000 UTC m=+5222.501575006" watchObservedRunningTime="2026-01-30 15:10:11.811988976 +0000 UTC m=+5222.513337467" Jan 30 15:10:21 crc kubenswrapper[4793]: I0130 15:10:21.398311 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:21 crc kubenswrapper[4793]: E0130 15:10:21.399128 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe"
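
Note: the recurring "RemoveContainer" / "Error syncing pod, skipping ... CrashLoopBackOff" pairs above (15:09:59, 15:10:10, 15:10:21, ...) are the kubelet's pod worker re-queuing machine-config-daemon-rdsch while its restart back-off window is still open; "back-off 5m0s" means the delay has already grown to its cap. A minimal Go sketch of that doubling-with-a-cap schedule, assuming the upstream kubelet defaults of a 10s initial delay and a 5m cap:

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch of a kubelet-style container restart back-off: the wait doubles
    // after each failed restart and is clamped at a maximum. The 10s start and
    // 5m cap are assumed upstream defaults; "back-off 5m0s" in the log above
    // is the capped value being reported on every retry.
    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("restart attempt %d: wait %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // prints as "5m0s", matching the log message
            }
        }
    }

Until the back-off expires, each sync attempt is rejected immediately, which is why the same dead container ID fe210ffe... keeps reappearing without a new start.
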
Jan 30 15:10:32 crc kubenswrapper[4793]: I0130 15:10:32.401183 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:32 crc kubenswrapper[4793]: E0130 15:10:32.401952 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:44 crc kubenswrapper[4793]: I0130 15:10:44.398786 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:44 crc kubenswrapper[4793]: E0130 15:10:44.399752 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:10:58 crc kubenswrapper[4793]: I0130 15:10:58.398159 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:10:58 crc kubenswrapper[4793]: E0130 15:10:58.398728 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:02 crc kubenswrapper[4793]: I0130 15:11:02.990835 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:02 crc kubenswrapper[4793]: I0130 15:11:02.994945 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.010734 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.169305 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.169349 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.169547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.271605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.271690 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.272314 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.273123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.274858 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.299937 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"redhat-marketplace-gxqkt\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:03 crc kubenswrapper[4793]: I0130 15:11:03.376153 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:04 crc kubenswrapper[4793]: I0130 15:11:04.052947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:04 crc kubenswrapper[4793]: I0130 15:11:04.246270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerStarted","Data":"d2d8096fc57f1afae2693dd57e7e3fe427947ad7e4989e5dfdc716dfe95f9ff9"} Jan 30 15:11:05 crc kubenswrapper[4793]: I0130 15:11:05.256441 4793 generic.go:334] "Generic (PLEG): container finished" podID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" exitCode=0 Jan 30 15:11:05 crc kubenswrapper[4793]: I0130 15:11:05.256601 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb"} Jan 30 15:11:06 crc kubenswrapper[4793]: I0130 15:11:06.266835 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerStarted","Data":"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be"} Jan 30 15:11:08 crc kubenswrapper[4793]: I0130 15:11:08.286019 4793 generic.go:334] "Generic (PLEG): container finished" podID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" exitCode=0 Jan 30 15:11:08 crc kubenswrapper[4793]: I0130 15:11:08.286117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be"} Jan 30 15:11:09 crc kubenswrapper[4793]: I0130 15:11:09.313159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerStarted","Data":"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a"} Jan 30 15:11:09 crc kubenswrapper[4793]: I0130 15:11:09.344016 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxqkt" podStartSLOduration=3.8274411219999998 podStartE2EDuration="7.34399046s" podCreationTimestamp="2026-01-30 15:11:02 +0000 UTC" firstStartedPulling="2026-01-30 15:11:05.258637429 +0000 UTC m=+5275.959985920" lastFinishedPulling="2026-01-30 15:11:08.775186767 +0000 UTC m=+5279.476535258" observedRunningTime="2026-01-30 15:11:09.336483464 +0000 UTC m=+5280.037831965" watchObservedRunningTime="2026-01-30 15:11:09.34399046 +0000 UTC m=+5280.045338941"
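
Note: in the "Observed pod startup duration" entry above, the two figures differ by exactly the image-pull window, i.e. podStartSLOduration appears to be the end-to-end startup time with pulling excluded (a reading derived from the logged timestamps, not an authoritative definition):

    image-pull window   = lastFinishedPulling - firstStartedPulling
                        = 15:11:08.775186767 - 15:11:05.258637429 = 3.516549338s
    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 15:11:09.343990460 - 15:11:02.000000000 = 7.343990460s
    podStartSLOduration = 7.343990460 - 3.516549338 = 3.827441122s

The logged value 3.8274411219999998 is the same number with ordinary float64 formatting noise.
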
containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:11 crc kubenswrapper[4793]: E0130 15:11:11.399373 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:13 crc kubenswrapper[4793]: I0130 15:11:13.380753 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:13 crc kubenswrapper[4793]: I0130 15:11:13.381037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:14 crc kubenswrapper[4793]: I0130 15:11:14.430446 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gxqkt" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" probeResult="failure" output=< Jan 30 15:11:14 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:11:14 crc kubenswrapper[4793]: > Jan 30 15:11:15 crc kubenswrapper[4793]: I0130 15:11:15.367208 4793 generic.go:334] "Generic (PLEG): container finished" podID="e91a73e1-11d2-483f-b279-af21dd483350" containerID="cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f" exitCode=0 Jan 30 15:11:15 crc kubenswrapper[4793]: I0130 15:11:15.367312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-2g87h" event={"ID":"e91a73e1-11d2-483f-b279-af21dd483350","Type":"ContainerDied","Data":"cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f"} Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.500157 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.540212 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-2g87h"] Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.549523 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-2g87h"] Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.610610 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") pod \"e91a73e1-11d2-483f-b279-af21dd483350\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.610697 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") pod \"e91a73e1-11d2-483f-b279-af21dd483350\" (UID: \"e91a73e1-11d2-483f-b279-af21dd483350\") " Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.611122 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host" (OuterVolumeSpecName: "host") pod "e91a73e1-11d2-483f-b279-af21dd483350" (UID: "e91a73e1-11d2-483f-b279-af21dd483350"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.630191 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n" (OuterVolumeSpecName: "kube-api-access-87c6n") pod "e91a73e1-11d2-483f-b279-af21dd483350" (UID: "e91a73e1-11d2-483f-b279-af21dd483350"). InnerVolumeSpecName "kube-api-access-87c6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.713567 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87c6n\" (UniqueName: \"kubernetes.io/projected/e91a73e1-11d2-483f-b279-af21dd483350-kube-api-access-87c6n\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:16 crc kubenswrapper[4793]: I0130 15:11:16.713842 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e91a73e1-11d2-483f-b279-af21dd483350-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:17 crc kubenswrapper[4793]: I0130 15:11:17.384195 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4781b9ae2f920b71223792d335627b59adabaf76e90902cdd7e6c060633fa2cf" Jan 30 15:11:17 crc kubenswrapper[4793]: I0130 15:11:17.384273 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-2g87h" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.416632 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91a73e1-11d2-483f-b279-af21dd483350" path="/var/lib/kubelet/pods/e91a73e1-11d2-483f-b279-af21dd483350/volumes" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.626821 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/crc-debug-948q6"] Jan 30 15:11:18 crc kubenswrapper[4793]: E0130 15:11:18.627326 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91a73e1-11d2-483f-b279-af21dd483350" containerName="container-00" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.627349 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91a73e1-11d2-483f-b279-af21dd483350" containerName="container-00" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.627598 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91a73e1-11d2-483f-b279-af21dd483350" containerName="container-00" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.628429 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.764986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.765293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.867451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.867516 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.867786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.894803 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"crc-debug-948q6\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:18 crc kubenswrapper[4793]: I0130 15:11:18.947280 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:19 crc kubenswrapper[4793]: I0130 15:11:19.404237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-948q6" event={"ID":"c9819c60-4bee-4eaf-87a4-481aef7f40ba","Type":"ContainerStarted","Data":"01a90db7be859ecafd810b8a07f2c26755a5394c65fea220431985c5bdccb2d5"} Jan 30 15:11:20 crc kubenswrapper[4793]: I0130 15:11:20.412946 4793 generic.go:334] "Generic (PLEG): container finished" podID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerID="568ed0e82f10baad26d3430efb936eb0714fc3fed75c7084e20ef051683db5ff" exitCode=0 Jan 30 15:11:20 crc kubenswrapper[4793]: I0130 15:11:20.413037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-948q6" event={"ID":"c9819c60-4bee-4eaf-87a4-481aef7f40ba","Type":"ContainerDied","Data":"568ed0e82f10baad26d3430efb936eb0714fc3fed75c7084e20ef051683db5ff"} Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.524764 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.621615 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") pod \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.621755 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") pod \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\" (UID: \"c9819c60-4bee-4eaf-87a4-481aef7f40ba\") " Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.621875 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host" (OuterVolumeSpecName: "host") pod "c9819c60-4bee-4eaf-87a4-481aef7f40ba" (UID: "c9819c60-4bee-4eaf-87a4-481aef7f40ba"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.622253 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9819c60-4bee-4eaf-87a4-481aef7f40ba-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.641475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq" (OuterVolumeSpecName: "kube-api-access-qvjvq") pod "c9819c60-4bee-4eaf-87a4-481aef7f40ba" (UID: "c9819c60-4bee-4eaf-87a4-481aef7f40ba"). InnerVolumeSpecName "kube-api-access-qvjvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:21 crc kubenswrapper[4793]: I0130 15:11:21.723491 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvjvq\" (UniqueName: \"kubernetes.io/projected/c9819c60-4bee-4eaf-87a4-481aef7f40ba-kube-api-access-qvjvq\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.435453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-948q6" event={"ID":"c9819c60-4bee-4eaf-87a4-481aef7f40ba","Type":"ContainerDied","Data":"01a90db7be859ecafd810b8a07f2c26755a5394c65fea220431985c5bdccb2d5"} Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.435773 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a90db7be859ecafd810b8a07f2c26755a5394c65fea220431985c5bdccb2d5" Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.435561 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-948q6" Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.458874 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-948q6"] Jan 30 15:11:22 crc kubenswrapper[4793]: I0130 15:11:22.471797 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-948q6"] Jan 30 15:11:23 crc kubenswrapper[4793]: I0130 15:11:23.448554 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:23 crc kubenswrapper[4793]: I0130 15:11:23.520440 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:23 crc kubenswrapper[4793]: I0130 15:11:23.687247 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.000044 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg6df/crc-debug-zxkmb"] Jan 30 15:11:24 crc kubenswrapper[4793]: E0130 15:11:24.000575 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerName="container-00" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.000599 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerName="container-00" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.000799 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" containerName="container-00" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.001522 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.072252 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.072872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.175084 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.175158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.175334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.205426 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"crc-debug-zxkmb\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.320713 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.398769 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:24 crc kubenswrapper[4793]: E0130 15:11:24.399232 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.407832 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9819c60-4bee-4eaf-87a4-481aef7f40ba" path="/var/lib/kubelet/pods/c9819c60-4bee-4eaf-87a4-481aef7f40ba/volumes" Jan 30 15:11:24 crc kubenswrapper[4793]: I0130 15:11:24.454143 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" event={"ID":"b52e7a23-6edb-43d6-9726-23c6796194b1","Type":"ContainerStarted","Data":"f9effb3adf3233c9f76a3e2b64981a874f3a34cd1f5b88e2d7a0cc3eb50c85fd"} Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.465012 4793 generic.go:334] "Generic (PLEG): container finished" podID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerID="c72a517fa26537db3ff3b91d8b7910984b9b712d451f95ae207c6331a56c555b" exitCode=0 Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.465095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" event={"ID":"b52e7a23-6edb-43d6-9726-23c6796194b1","Type":"ContainerDied","Data":"c72a517fa26537db3ff3b91d8b7910984b9b712d451f95ae207c6331a56c555b"} Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.465304 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxqkt" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" containerID="cri-o://cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" gracePeriod=2 Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.522619 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-zxkmb"] Jan 30 15:11:25 crc kubenswrapper[4793]: I0130 15:11:25.533650 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/crc-debug-zxkmb"] Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.029381 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.110113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") pod \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.110607 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") pod \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.110821 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") pod \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\" (UID: \"efa561ef-e4d7-4893-bec0-ff16ee72f7b8\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.111149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities" (OuterVolumeSpecName: "utilities") pod "efa561ef-e4d7-4893-bec0-ff16ee72f7b8" (UID: "efa561ef-e4d7-4893-bec0-ff16ee72f7b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.113175 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.126788 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv" (OuterVolumeSpecName: "kube-api-access-b9smv") pod "efa561ef-e4d7-4893-bec0-ff16ee72f7b8" (UID: "efa561ef-e4d7-4893-bec0-ff16ee72f7b8"). InnerVolumeSpecName "kube-api-access-b9smv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.148009 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa561ef-e4d7-4893-bec0-ff16ee72f7b8" (UID: "efa561ef-e4d7-4893-bec0-ff16ee72f7b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.215751 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.215790 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9smv\" (UniqueName: \"kubernetes.io/projected/efa561ef-e4d7-4893-bec0-ff16ee72f7b8-kube-api-access-b9smv\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475410 4793 generic.go:334] "Generic (PLEG): container finished" podID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" exitCode=0 Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475471 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxqkt" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475481 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a"} Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475836 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxqkt" event={"ID":"efa561ef-e4d7-4893-bec0-ff16ee72f7b8","Type":"ContainerDied","Data":"d2d8096fc57f1afae2693dd57e7e3fe427947ad7e4989e5dfdc716dfe95f9ff9"} Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.475860 4793 scope.go:117] "RemoveContainer" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.548899 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.549246 4793 scope.go:117] "RemoveContainer" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.561965 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.568714 4793 scope.go:117] "RemoveContainer" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.576841 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxqkt"] Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.622010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") pod \"b52e7a23-6edb-43d6-9726-23c6796194b1\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.622138 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") pod \"b52e7a23-6edb-43d6-9726-23c6796194b1\" (UID: \"b52e7a23-6edb-43d6-9726-23c6796194b1\") " Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.622706 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host" (OuterVolumeSpecName: "host") pod "b52e7a23-6edb-43d6-9726-23c6796194b1" (UID: "b52e7a23-6edb-43d6-9726-23c6796194b1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.626731 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt" (OuterVolumeSpecName: "kube-api-access-khntt") pod "b52e7a23-6edb-43d6-9726-23c6796194b1" (UID: "b52e7a23-6edb-43d6-9726-23c6796194b1"). InnerVolumeSpecName "kube-api-access-khntt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.630940 4793 scope.go:117] "RemoveContainer" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" Jan 30 15:11:26 crc kubenswrapper[4793]: E0130 15:11:26.632219 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a\": container with ID starting with cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a not found: ID does not exist" containerID="cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.632354 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a"} err="failed to get container status \"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a\": rpc error: code = NotFound desc = could not find container \"cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a\": container with ID starting with cac0c787bce0adf73cfb747526051a186f75984c1e97a19ef8d11e38d1a15c1a not found: ID does not exist" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.632462 4793 scope.go:117] "RemoveContainer" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" Jan 30 15:11:26 crc kubenswrapper[4793]: E0130 15:11:26.633642 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be\": container with ID starting with 839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be not found: ID does not exist" containerID="839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.633707 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be"} err="failed to get container status \"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be\": rpc error: code = NotFound desc = could not find container \"839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be\": container with ID starting with 839d5f4fe1ed04e9cce7ed031cf3cb7ffab443833677eb673f98f335c14f29be not found: ID does not exist" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.633745 4793 scope.go:117] "RemoveContainer" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" Jan 30 15:11:26 crc kubenswrapper[4793]: E0130 15:11:26.634225 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb\": container with ID starting with 5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb not found: ID does not exist" containerID="5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.634345 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb"} err="failed to get container status \"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb\": rpc error: code = NotFound desc = could not 
find container \"5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb\": container with ID starting with 5f5e157ce39712247982ffaa75423f76acbaf05c30e8d8424e7d155be42399eb not found: ID does not exist" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.724826 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khntt\" (UniqueName: \"kubernetes.io/projected/b52e7a23-6edb-43d6-9726-23c6796194b1-kube-api-access-khntt\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:26 crc kubenswrapper[4793]: I0130 15:11:26.724857 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52e7a23-6edb-43d6-9726-23c6796194b1-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:11:27 crc kubenswrapper[4793]: I0130 15:11:27.491206 4793 scope.go:117] "RemoveContainer" containerID="c72a517fa26537db3ff3b91d8b7910984b9b712d451f95ae207c6331a56c555b" Jan 30 15:11:27 crc kubenswrapper[4793]: I0130 15:11:27.491256 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/crc-debug-zxkmb" Jan 30 15:11:28 crc kubenswrapper[4793]: I0130 15:11:28.408655 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" path="/var/lib/kubelet/pods/b52e7a23-6edb-43d6-9726-23c6796194b1/volumes" Jan 30 15:11:28 crc kubenswrapper[4793]: I0130 15:11:28.409454 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" path="/var/lib/kubelet/pods/efa561ef-e4d7-4893-bec0-ff16ee72f7b8/volumes" Jan 30 15:11:38 crc kubenswrapper[4793]: I0130 15:11:38.397780 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:38 crc kubenswrapper[4793]: E0130 15:11:38.398772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:47 crc kubenswrapper[4793]: I0130 15:11:47.760708 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api/0.log" Jan 30 15:11:47 crc kubenswrapper[4793]: I0130 15:11:47.910204 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api-log/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.019680 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.042222 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener-log/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.249007 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.327419 4793 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker-log/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.492609 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6_2ba6b544-0042-43d7-abe9-bc40439f804b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.643723 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-notification-agent/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.655884 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-central-agent/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.778636 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/sg-core/0.log" Jan 30 15:11:48 crc kubenswrapper[4793]: I0130 15:11:48.791378 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/proxy-httpd/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.018620 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.042947 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api-log/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.208537 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/cinder-scheduler/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.325492 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/probe/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.369292 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc_260f1ea9-6ba5-40aa-ab56-e95237cb1009/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.398540 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:11:49 crc kubenswrapper[4793]: E0130 15:11:49.398845 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.576744 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.665307 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jchk2_44f4e8fd-4511-4670-944a-e37dfc6238c8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:49 crc kubenswrapper[4793]: I0130 15:11:49.985213 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.088727 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qgztn_f1632f4b-e0e5-4069-a77b-ae4f1911869b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.172293 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/dnsmasq-dns/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.272547 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-httpd/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.340688 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-log/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.659958 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-log/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.665781 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-httpd/0.log" Jan 30 15:11:50 crc kubenswrapper[4793]: I0130 15:11:50.870712 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/2.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.101983 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/1.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.217950 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp_ae4f8964-b104-43bb-8356-bb53a9635527/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.446077 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon-log/0.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.691951 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496421-n28p5_617a2857-c4b0-4558-9834-551a98cd534f/keystone-cron/0.log" Jan 30 15:11:51 crc kubenswrapper[4793]: I0130 15:11:51.751674 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lqrxr_1ee9c552-088f-4e61-961e-7062bf6e874b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:52 crc kubenswrapper[4793]: I0130 15:11:52.001079 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a3625667-be35-4d81-84f9-e00593f1c627/kube-state-metrics/0.log" Jan 30 15:11:52 crc kubenswrapper[4793]: I0130 15:11:52.297218 4793 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2_96926233-9ce4-4a0b-bab4-d0c4fa90389b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:52 crc kubenswrapper[4793]: I0130 15:11:52.441062 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d689db86f-zslsz_0ed57c3d-4992-4cfa-8655-1587b5897df6/keystone-api/0.log" Jan 30 15:11:53 crc kubenswrapper[4793]: I0130 15:11:53.229799 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-httpd/0.log" Jan 30 15:11:53 crc kubenswrapper[4793]: I0130 15:11:53.248230 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk_92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:53 crc kubenswrapper[4793]: I0130 15:11:53.532530 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-api/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.103017 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7/nova-cell0-conductor-conductor/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.472022 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d2acd609-26c0-4b98-861f-a8b12fcd07bf/nova-cell1-conductor-conductor/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.801641 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_abaabb74-42dd-40b6-9cb7-69db46f235df/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 15:11:54 crc kubenswrapper[4793]: I0130 15:11:54.958865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-log/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.113922 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sk8t8_dfc4d2ba-0414-4f1e-8733-a75d39218ef8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.314338 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-log/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.457519 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-api/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.682782 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log" Jan 30 15:11:55 crc kubenswrapper[4793]: I0130 15:11:55.988862 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.003225 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/galera/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.112932 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_9e04e820-112a-4afa-b908-f9b8be3e9e7c/nova-scheduler-scheduler/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.352664 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.673137 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/galera/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.684450 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log" Jan 30 15:11:56 crc kubenswrapper[4793]: I0130 15:11:56.944547 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7/openstackclient/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.110242 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-45fd5_230700ff-5087-4d0d-9d93-90b597d2ef72/ovn-controller/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.151594 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vx7z5_2eaf3033-e5f4-48bc-bdee-b7d97e57e765/openstack-network-exporter/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.395244 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-metadata/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.543677 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.796695 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.806555 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server/0.log" Jan 30 15:11:57 crc kubenswrapper[4793]: I0130 15:11:57.850806 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovs-vswitchd/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.111851 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-45sz7_dbd66148-cdd0-4e92-9601-3ef1576a5d3f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.140622 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/openstack-network-exporter/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.244803 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/ovn-northd/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.607346 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_89e99d15-97ad-4ac5-ba68-82ef88460222/memcached/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.668183 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/openstack-network-exporter/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678341 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678738 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678754 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678776 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-utilities" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-utilities" Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678798 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerName="container-00" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678804 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerName="container-00" Jan 30 15:11:58 crc kubenswrapper[4793]: E0130 15:11:58.678812 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-content" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678817 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="extract-content" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.678993 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52e7a23-6edb-43d6-9726-23c6796194b1" containerName="container-00" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.679007 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa561ef-e4d7-4893-bec0-ff16ee72f7b8" containerName="registry-server" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.680184 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.704741 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.755671 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/ovsdbserver-nb/0.log" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.776297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.776548 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.776851 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.879396 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.879451 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.879507 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.880136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.880129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"redhat-operators-hsmd9\" (UID: 
\"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.897255 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"redhat-operators-hsmd9\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:58 crc kubenswrapper[4793]: I0130 15:11:58.981198 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/ovsdbserver-sb/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.013263 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.052159 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/openstack-network-exporter/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.450594 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-api/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.518561 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-log/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.547537 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.673796 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log" Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.771068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerStarted","Data":"18b8805c99c2d22576ab45c0c54990056672997e71533374fa339804e56b3512"} Jan 30 15:11:59 crc kubenswrapper[4793]: I0130 15:11:59.937649 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.048945 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/rabbitmq/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.123064 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.497933 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/rabbitmq/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.596066 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7_0538b501-a861-4302-b26e-f5cfb17ed62a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.756737 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log" Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.780680 4793 generic.go:334] "Generic (PLEG): container finished" podID="e3db2a3d-671e-4af9-8758-032ec6169132" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" exitCode=0 Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.780712 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf"} Jan 30 15:12:00 crc kubenswrapper[4793]: I0130 15:12:00.930862 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t7bl5_b89c70f6-dabd-4984-8f21-235a9ab2f307/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.027283 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8_03127c65-edbf-41bd-9543-35ae0eddbff6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.153962 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5q58_7915ec77-ca16-4f23-a367-42b525c80284/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.398868 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:01 crc kubenswrapper[4793]: E0130 15:12:01.399179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.465255 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nlncv_3cad1dbc-effe-48d8-af45-df0a45e16783/ssh-known-hosts-edpm-deployment/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.474861 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-server/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.767598 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-auditor/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.795622 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-q459t_50011731-846f-4e86-8664-f9c797dc64ed/swift-ring-rebalance/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.822471 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-httpd/0.log" Jan 30 15:12:01 crc kubenswrapper[4793]: I0130 15:12:01.888424 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-reaper/0.log" Jan 30 15:12:01 crc 
kubenswrapper[4793]: I0130 15:12:01.999668 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-replicator/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.052590 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-server/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.060147 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-replicator/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.083572 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-auditor/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.171590 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-server/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.246734 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-updater/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.324882 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-auditor/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.336244 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-expirer/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.393283 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-replicator/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.460370 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-server/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.516950 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-updater/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.618865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/swift-recon-cron/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.621400 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/rsync/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.790452 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb_8b1317e1-63f1-4b06-aa31-5df5459c6ce6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.800378 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerStarted","Data":"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5"} Jan 30 15:12:02 crc kubenswrapper[4793]: I0130 15:12:02.958475 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_4bf53e2d-d024-4526-ada2-0ee6b461babb/tempest-tests-tempest-tests-runner/0.log" Jan 30 15:12:03 crc kubenswrapper[4793]: I0130 15:12:03.019709 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8de9d25e-7ca7-4338-a64e-ed95f7bd9de9/test-operator-logs-container/0.log" Jan 30 15:12:03 crc kubenswrapper[4793]: I0130 15:12:03.142442 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt_dcc6f491-d722-48e4-bcb8-8a9de7603786/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:12:12 crc kubenswrapper[4793]: I0130 15:12:12.884953 4793 generic.go:334] "Generic (PLEG): container finished" podID="e3db2a3d-671e-4af9-8758-032ec6169132" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" exitCode=0 Jan 30 15:12:12 crc kubenswrapper[4793]: I0130 15:12:12.885009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5"} Jan 30 15:12:13 crc kubenswrapper[4793]: I0130 15:12:13.398837 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:13 crc kubenswrapper[4793]: E0130 15:12:13.399228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:13 crc kubenswrapper[4793]: I0130 15:12:13.897568 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerStarted","Data":"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6"} Jan 30 15:12:13 crc kubenswrapper[4793]: I0130 15:12:13.915318 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hsmd9" podStartSLOduration=3.132562285 podStartE2EDuration="15.915300143s" podCreationTimestamp="2026-01-30 15:11:58 +0000 UTC" firstStartedPulling="2026-01-30 15:12:00.782023758 +0000 UTC m=+5331.483372249" lastFinishedPulling="2026-01-30 15:12:13.564761626 +0000 UTC m=+5344.266110107" observedRunningTime="2026-01-30 15:12:13.912205266 +0000 UTC m=+5344.613553767" watchObservedRunningTime="2026-01-30 15:12:13.915300143 +0000 UTC m=+5344.616648634" Jan 30 15:12:19 crc kubenswrapper[4793]: I0130 15:12:19.013440 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:19 crc kubenswrapper[4793]: I0130 15:12:19.014879 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:20 crc kubenswrapper[4793]: I0130 15:12:20.063565 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hsmd9" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" probeResult="failure" output=< Jan 30 15:12:20 
crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:12:20 crc kubenswrapper[4793]: > Jan 30 15:12:26 crc kubenswrapper[4793]: I0130 15:12:26.398193 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:26 crc kubenswrapper[4793]: E0130 15:12:26.399037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:29 crc kubenswrapper[4793]: I0130 15:12:29.066022 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:29 crc kubenswrapper[4793]: I0130 15:12:29.120909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.288887 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.426827 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-8bg6c_ec981da4-a3ba-4e4e-a0eb-2168ab79fe77/manager/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.587619 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.785872 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.822285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:12:30 crc kubenswrapper[4793]: I0130 15:12:30.842995 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.046627 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hsmd9" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" containerID="cri-o://4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" gracePeriod=2 Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.066024 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.125662 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/extract/0.log" Jan 30 15:12:31 crc 
kubenswrapper[4793]: I0130 15:12:31.208384 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.426441 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-9kwwr_8835e5d9-c37d-4744-95cb-c56c10a58647/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.498515 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-hjpkr_6f991e04-2db3-4b32-bc83-8bbce4ce7a08/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.534620 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.601987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") pod \"e3db2a3d-671e-4af9-8758-032ec6169132\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.602108 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") pod \"e3db2a3d-671e-4af9-8758-032ec6169132\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.602163 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") pod \"e3db2a3d-671e-4af9-8758-032ec6169132\" (UID: \"e3db2a3d-671e-4af9-8758-032ec6169132\") " Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.604527 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities" (OuterVolumeSpecName: "utilities") pod "e3db2a3d-671e-4af9-8758-032ec6169132" (UID: "e3db2a3d-671e-4af9-8758-032ec6169132"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.610171 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr" (OuterVolumeSpecName: "kube-api-access-fp5gr") pod "e3db2a3d-671e-4af9-8758-032ec6169132" (UID: "e3db2a3d-671e-4af9-8758-032ec6169132"). InnerVolumeSpecName "kube-api-access-fp5gr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.704292 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp5gr\" (UniqueName: \"kubernetes.io/projected/e3db2a3d-671e-4af9-8758-032ec6169132-kube-api-access-fp5gr\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.704324 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708105 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:31 crc kubenswrapper[4793]: E0130 15:12:31.708479 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-content" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708497 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-content" Jan 30 15:12:31 crc kubenswrapper[4793]: E0130 15:12:31.708525 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-utilities" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708532 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="extract-utilities" Jan 30 15:12:31 crc kubenswrapper[4793]: E0130 15:12:31.708555 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708561 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.708741 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" containerName="registry-server" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.727739 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.730842 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.780368 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3db2a3d-671e-4af9-8758-032ec6169132" (UID: "e3db2a3d-671e-4af9-8758-032ec6169132"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.806098 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3db2a3d-671e-4af9-8758-032ec6169132-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.860397 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-g5848_1d859404-a29c-46c9-b66a-fed5ff0b13f0/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.893957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-k4tz9_8d24cd33-2902-424a-8ffc-76b1e4c2f482/manager/0.log" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.907695 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.907770 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:31 crc kubenswrapper[4793]: I0130 15:12:31.907860 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010087 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"certified-operators-65wdd\" (UID: 
\"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.010818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.025456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"certified-operators-65wdd\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.057472 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058327 4793 generic.go:334] "Generic (PLEG): container finished" podID="e3db2a3d-671e-4af9-8758-032ec6169132" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" exitCode=0 Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6"} Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058508 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsmd9" event={"ID":"e3db2a3d-671e-4af9-8758-032ec6169132","Type":"ContainerDied","Data":"18b8805c99c2d22576ab45c0c54990056672997e71533374fa339804e56b3512"} Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058576 4793 scope.go:117] "RemoveContainer" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.058740 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsmd9" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.112127 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.126289 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hsmd9"] Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.136137 4793 scope.go:117] "RemoveContainer" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.174919 4793 scope.go:117] "RemoveContainer" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.231532 4793 scope.go:117] "RemoveContainer" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" Jan 30 15:12:32 crc kubenswrapper[4793]: E0130 15:12:32.232635 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6\": container with ID starting with 4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6 not found: ID does not exist" containerID="4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.232697 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6"} err="failed to get container status \"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6\": rpc error: code = NotFound desc = could not find container \"4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6\": container with ID starting with 4655af87a8a3b0b396d3e86ce4ff69f93f7ca841ce97bd0eac399121da43e2b6 not found: ID does not exist" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.232740 4793 scope.go:117] "RemoveContainer" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" Jan 30 15:12:32 crc kubenswrapper[4793]: E0130 15:12:32.233313 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5\": container with ID starting with a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5 not found: ID does not exist" containerID="a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.233342 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5"} err="failed to get container status \"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5\": rpc error: code = NotFound desc = could not find container \"a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5\": container with ID starting with a57240d95a3b29acda2b4d979c3bf16aacdbff69c5f11b5675df3ef9b2c2e7a5 not found: ID does not exist" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.233362 4793 scope.go:117] "RemoveContainer" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" Jan 30 15:12:32 crc kubenswrapper[4793]: E0130 15:12:32.234703 4793 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf\": container with ID starting with 06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf not found: ID does not exist" containerID="06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.234849 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf"} err="failed to get container status \"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf\": rpc error: code = NotFound desc = could not find container \"06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf\": container with ID starting with 06b166c63c2db4e3c0361ead05c13479dd50d06bd8f301bae6d96ce760142ebf not found: ID does not exist" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.452207 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3db2a3d-671e-4af9-8758-032ec6169132" path="/var/lib/kubelet/pods/e3db2a3d-671e-4af9-8758-032ec6169132/volumes" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.600963 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-m4q78_710c57e4-a09e-4db1-a03b-13db05085d41/manager/0.log" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.670844 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-khfs7_97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642/manager/0.log" Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.678002 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:32 crc kubenswrapper[4793]: I0130 15:12:32.897852 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v77jx_7c34e714-0f18-4e41-ab9c-1dfe4859e644/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.066253 4793 generic.go:334] "Generic (PLEG): container finished" podID="60dfbdf5-5a19-4864-b113-60e96a555304" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" exitCode=0 Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.066313 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef"} Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.066337 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerStarted","Data":"329063fb66b0af99c37d443f70678ace1de380ba2fc9bb63f01f69a193285a8a"} Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.068033 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.109342 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-82cvq_bdcd04f7-09fa-4b1b-8b99-3de61a28a337/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.156695 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-9ftxd_ce9be14f-8255-421e-91b4-a30fc5482ff4/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.362380 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n29l5_fa88d14c-0581-439c-9da1-f1123e41a65a/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.445333 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-x6pk6_05415bc7-22dc-4b15-a047-6ed62755638d/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.724828 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-vtx9d_31ca6ac1-d2da-4325-baa4-e18fc3514721/manager/0.log" Jan 30 15:12:33 crc kubenswrapper[4793]: I0130 15:12:33.759489 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5nsr4_53576ec8-2f6d-4781-8906-726529cc6049/manager/0.log" Jan 30 15:12:34 crc kubenswrapper[4793]: I0130 15:12:34.195404 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs_e446e97c-6e9f-4dc2-b5fd-fb63451fd326/manager/0.log" Jan 30 15:12:34 crc kubenswrapper[4793]: I0130 15:12:34.333645 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-977cfdb67-sp4rd_2cec3782-823b-4ddf-909a-e773203cd721/operator/0.log" Jan 30 15:12:34 crc kubenswrapper[4793]: I0130 15:12:34.781338 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x56zx_e3b6e703-4540-4739-87cd-8699d4e04903/registry-server/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.059326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-27flx_02b8e60c-3514-4d72-bde6-5af374a926b1/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.084550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerStarted","Data":"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68"} Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.211172 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4ml88_6231ed92-57a8-4c48-9c75-e916940b22ea/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.351782 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nb4g2_2aae677d-830b-44b8-a792-3d0b527aee89/operator/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.488989 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75c5857d49-pm446_e9854850-e645-4364-a471-bef994f8536c/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.546202 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-vxhpt_3eb94c51-d506-4273-898b-dba537cabea6/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.753205 4793 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-tv5vr_6b21b0ca-d506-4b1b-b6e1-06e2a96ae033/manager/0.log" Jan 30 15:12:35 crc kubenswrapper[4793]: I0130 15:12:35.839871 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-qb5xp_5e215cef-de14-424d-9028-a48bad979192/manager/0.log" Jan 30 15:12:36 crc kubenswrapper[4793]: I0130 15:12:36.000830 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-btjpp_f65e9448-ee4e-4f22-9bd7-ecf650cb36b5/manager/0.log" Jan 30 15:12:36 crc kubenswrapper[4793]: I0130 15:12:36.093200 4793 generic.go:334] "Generic (PLEG): container finished" podID="60dfbdf5-5a19-4864-b113-60e96a555304" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" exitCode=0 Jan 30 15:12:36 crc kubenswrapper[4793]: I0130 15:12:36.093238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68"} Jan 30 15:12:37 crc kubenswrapper[4793]: I0130 15:12:37.104642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerStarted","Data":"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1"} Jan 30 15:12:37 crc kubenswrapper[4793]: I0130 15:12:37.398284 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:37 crc kubenswrapper[4793]: E0130 15:12:37.398554 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.058944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.060353 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.112126 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.138657 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65wdd" podStartSLOduration=7.663938034 podStartE2EDuration="11.13863816s" podCreationTimestamp="2026-01-30 15:12:31 +0000 UTC" firstStartedPulling="2026-01-30 15:12:33.067833762 +0000 UTC m=+5363.769182253" lastFinishedPulling="2026-01-30 15:12:36.542533888 +0000 UTC m=+5367.243882379" observedRunningTime="2026-01-30 15:12:37.140723305 +0000 UTC m=+5367.842071806" watchObservedRunningTime="2026-01-30 15:12:42.13863816 +0000 UTC m=+5372.839986651" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.183541 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:42 crc kubenswrapper[4793]: I0130 15:12:42.348677 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.154269 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65wdd" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" containerID="cri-o://fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" gracePeriod=2 Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.661967 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.795335 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") pod \"60dfbdf5-5a19-4864-b113-60e96a555304\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.795462 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") pod \"60dfbdf5-5a19-4864-b113-60e96a555304\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.795576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") pod \"60dfbdf5-5a19-4864-b113-60e96a555304\" (UID: \"60dfbdf5-5a19-4864-b113-60e96a555304\") " Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.796399 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities" (OuterVolumeSpecName: "utilities") pod "60dfbdf5-5a19-4864-b113-60e96a555304" (UID: "60dfbdf5-5a19-4864-b113-60e96a555304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.818234 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl" (OuterVolumeSpecName: "kube-api-access-lbgnl") pod "60dfbdf5-5a19-4864-b113-60e96a555304" (UID: "60dfbdf5-5a19-4864-b113-60e96a555304"). InnerVolumeSpecName "kube-api-access-lbgnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.856440 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60dfbdf5-5a19-4864-b113-60e96a555304" (UID: "60dfbdf5-5a19-4864-b113-60e96a555304"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.899230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbgnl\" (UniqueName: \"kubernetes.io/projected/60dfbdf5-5a19-4864-b113-60e96a555304-kube-api-access-lbgnl\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.899259 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:44 crc kubenswrapper[4793]: I0130 15:12:44.899270 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dfbdf5-5a19-4864-b113-60e96a555304-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164775 4793 generic.go:334] "Generic (PLEG): container finished" podID="60dfbdf5-5a19-4864-b113-60e96a555304" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" exitCode=0 Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164817 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1"} Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65wdd" event={"ID":"60dfbdf5-5a19-4864-b113-60e96a555304","Type":"ContainerDied","Data":"329063fb66b0af99c37d443f70678ace1de380ba2fc9bb63f01f69a193285a8a"} Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164861 4793 scope.go:117] "RemoveContainer" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.164987 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65wdd" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.186175 4793 scope.go:117] "RemoveContainer" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.208905 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.218968 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65wdd"] Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.226007 4793 scope.go:117] "RemoveContainer" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.260891 4793 scope.go:117] "RemoveContainer" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" Jan 30 15:12:45 crc kubenswrapper[4793]: E0130 15:12:45.263663 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1\": container with ID starting with fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1 not found: ID does not exist" containerID="fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.263718 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1"} err="failed to get container status \"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1\": rpc error: code = NotFound desc = could not find container \"fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1\": container with ID starting with fcdb825af79f94c92ca2fdd75d975216b9937ecc5c7927c8176e1f614dbbaba1 not found: ID does not exist" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.263746 4793 scope.go:117] "RemoveContainer" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" Jan 30 15:12:45 crc kubenswrapper[4793]: E0130 15:12:45.264222 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68\": container with ID starting with e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68 not found: ID does not exist" containerID="e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.264256 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68"} err="failed to get container status \"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68\": rpc error: code = NotFound desc = could not find container \"e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68\": container with ID starting with e6dd34c801a65ae6c1e8b8271dfd4f4f50442bf45eab052ad7c0c20b0d63ec68 not found: ID does not exist" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.264279 4793 scope.go:117] "RemoveContainer" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" Jan 30 15:12:45 crc kubenswrapper[4793]: E0130 15:12:45.264683 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef\": container with ID starting with 5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef not found: ID does not exist" containerID="5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef" Jan 30 15:12:45 crc kubenswrapper[4793]: I0130 15:12:45.264731 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef"} err="failed to get container status \"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef\": rpc error: code = NotFound desc = could not find container \"5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef\": container with ID starting with 5fd96d411721b8cf7702fae273a7304010a66d4d7dfa87cd57d2faebbf8a76ef not found: ID does not exist" Jan 30 15:12:46 crc kubenswrapper[4793]: I0130 15:12:46.408897 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" path="/var/lib/kubelet/pods/60dfbdf5-5a19-4864-b113-60e96a555304/volumes" Jan 30 15:12:48 crc kubenswrapper[4793]: I0130 15:12:48.400515 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:12:48 crc kubenswrapper[4793]: E0130 15:12:48.402385 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.401183 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/1.log" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.442357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.723076 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/kube-rbac-proxy/0.log" Jan 30 15:12:56 crc kubenswrapper[4793]: I0130 15:12:56.728970 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/machine-api-operator/0.log" Jan 30 15:13:00 crc kubenswrapper[4793]: I0130 15:13:00.429600 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:13:00 crc kubenswrapper[4793]: E0130 15:13:00.430597 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" 
podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:13:10 crc kubenswrapper[4793]: I0130 15:13:10.188807 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-26t5l_1b680507-f432-4019-b372-d9452d89aa97/cert-manager-controller/0.log" Jan 30 15:13:10 crc kubenswrapper[4793]: I0130 15:13:10.484860 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tzjhq_8fd78cec-1c0f-427e-8224-4021da0ede3c/cert-manager-cainjector/0.log" Jan 30 15:13:10 crc kubenswrapper[4793]: I0130 15:13:10.630194 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lm7l8_e88efb4a-1489-4847-adb4-230a8b5db6ef/cert-manager-webhook/0.log" Jan 30 15:13:15 crc kubenswrapper[4793]: I0130 15:13:15.399531 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:13:16 crc kubenswrapper[4793]: I0130 15:13:16.463843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5"} Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.283156 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kc5ft_5df01042-63fe-458a-b71d-d1f9bdf9ea66/nmstate-console-plugin/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.488992 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/kube-rbac-proxy/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.549142 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dh9db_e635e428-77d8-44fb-baa4-1af4bd603c10/nmstate-handler/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.631004 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/nmstate-metrics/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.707177 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9bsps_1f691ecb-c128-4332-a7ab-c4e173490f50/nmstate-operator/0.log" Jan 30 15:13:26 crc kubenswrapper[4793]: I0130 15:13:26.843297 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hw489_68bcadc4-02c3-44c0-a252-0606ff1f0a09/nmstate-webhook/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.519307 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/kube-rbac-proxy/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.628145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/controller/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.764619 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.993149 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:54 crc kubenswrapper[4793]: I0130 15:13:54.995317 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.024594 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.027430 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.240513 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.252136 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.257356 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.304552 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.478101 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.505649 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.521742 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.547534 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/controller/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.793426 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr-metrics/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.833175 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy/0.log" Jan 30 15:13:55 crc kubenswrapper[4793]: I0130 15:13:55.839974 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy-frr/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.119460 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/reloader/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.158263 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4p6gx_e5a76649-d081-4224-baca-095ca1ffadfd/frr-k8s-webhook-server/0.log" Jan 30 
15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.453694 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fbd4d697c-ndglw_75266e51-59ee-432d-b56a-ba972e5ff25b/manager/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.651458 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6446fc49bd-rzbbm_45949f1b-1075-4d7f-9007-8525e0364a55/webhook-server/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.832798 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/kube-rbac-proxy/0.log" Jan 30 15:13:56 crc kubenswrapper[4793]: I0130 15:13:56.898294 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr/0.log" Jan 30 15:13:57 crc kubenswrapper[4793]: I0130 15:13:57.318313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/speaker/0.log" Jan 30 15:14:00 crc kubenswrapper[4793]: I0130 15:14:00.538961 4793 scope.go:117] "RemoveContainer" containerID="2e68fc094c6474084a00ace7a1343c3281487ac0b42f6c0f86c4ce491d8395ce" Jan 30 15:14:00 crc kubenswrapper[4793]: I0130 15:14:00.562244 4793 scope.go:117] "RemoveContainer" containerID="d6973b535c9ecb060763fdccd1de889c01aef82d5985f11c0ff82c0869318f33" Jan 30 15:14:00 crc kubenswrapper[4793]: I0130 15:14:00.608876 4793 scope.go:117] "RemoveContainer" containerID="4ac9e4de050e07af6f6a3d4ab7b9515ece2210c422a53f0f5e0a00047769d72b" Jan 30 15:14:11 crc kubenswrapper[4793]: I0130 15:14:11.929450 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.156228 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.214499 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.229020 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.422558 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/extract/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.459684 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.460505 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.651857 
4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.846653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.886357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:14:12 crc kubenswrapper[4793]: I0130 15:14:12.910653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.187670 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.188551 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/extract/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.236785 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.459521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.647216 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.690527 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.690547 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.841765 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:14:13 crc kubenswrapper[4793]: I0130 15:14:13.871018 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.174251 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.501694 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.518003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.555937 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:14:14 crc kubenswrapper[4793]: I0130 15:14:14.691263 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/registry-server/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.049858 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.053207 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.381629 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zkjbp_5834bf4b-676f-4ece-bcee-28949a7109ca/marketplace-operator/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.527223 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.639764 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.750810 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.825934 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/registry-server/0.log" Jan 30 15:14:15 crc kubenswrapper[4793]: I0130 15:14:15.850897 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.058632 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.079761 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.288957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.337159 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/registry-server/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.518029 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.565565 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.574630 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.748908 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:14:16 crc kubenswrapper[4793]: I0130 15:14:16.775407 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:14:17 crc kubenswrapper[4793]: I0130 15:14:17.381003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/registry-server/0.log" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.150181 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589"] Jan 30 15:15:00 crc kubenswrapper[4793]: E0130 15:15:00.151219 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-utilities" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151235 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-utilities" Jan 30 15:15:00 crc kubenswrapper[4793]: E0130 15:15:00.151263 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151274 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" Jan 30 15:15:00 crc kubenswrapper[4793]: E0130 15:15:00.151290 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-content" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151298 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="extract-content" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.151492 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="60dfbdf5-5a19-4864-b113-60e96a555304" containerName="registry-server" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.152356 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.155432 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.155966 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.217346 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589"] Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.309178 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.309430 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.309499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.411793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.411854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.411938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.412747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod 
\"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.417482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.447339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"collect-profiles-29496435-9d589\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.475171 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:00 crc kubenswrapper[4793]: I0130 15:15:00.937021 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589"] Jan 30 15:15:01 crc kubenswrapper[4793]: I0130 15:15:01.753675 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerStarted","Data":"55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a"} Jan 30 15:15:01 crc kubenswrapper[4793]: I0130 15:15:01.754016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerStarted","Data":"0687be94c24be55440f411cf6b03ef0f1c8455e89eab84818c383651a859ab98"} Jan 30 15:15:01 crc kubenswrapper[4793]: I0130 15:15:01.772539 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" podStartSLOduration=1.772519518 podStartE2EDuration="1.772519518s" podCreationTimestamp="2026-01-30 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:15:01.767639009 +0000 UTC m=+5512.468987500" watchObservedRunningTime="2026-01-30 15:15:01.772519518 +0000 UTC m=+5512.473867999" Jan 30 15:15:02 crc kubenswrapper[4793]: E0130 15:15:02.922682 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b368057_7309_4308_9956_1850a9297956.slice/crio-55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a.scope\": RecentStats: unable to find data in memory cache]" Jan 30 15:15:03 crc kubenswrapper[4793]: I0130 15:15:03.774411 4793 generic.go:334] "Generic (PLEG): container finished" podID="9b368057-7309-4308-9956-1850a9297956" containerID="55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a" exitCode=0 Jan 30 15:15:03 crc kubenswrapper[4793]: I0130 15:15:03.774453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerDied","Data":"55b3bd49efdf664f3e5e3f8829bbad8853366867e6db9ad6f828d67ec343683a"} Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.193868 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.315646 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") pod \"9b368057-7309-4308-9956-1850a9297956\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.316116 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") pod \"9b368057-7309-4308-9956-1850a9297956\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.316266 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") pod \"9b368057-7309-4308-9956-1850a9297956\" (UID: \"9b368057-7309-4308-9956-1850a9297956\") " Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.317099 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b368057-7309-4308-9956-1850a9297956" (UID: "9b368057-7309-4308-9956-1850a9297956"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.321163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv" (OuterVolumeSpecName: "kube-api-access-77qcv") pod "9b368057-7309-4308-9956-1850a9297956" (UID: "9b368057-7309-4308-9956-1850a9297956"). InnerVolumeSpecName "kube-api-access-77qcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.321765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b368057-7309-4308-9956-1850a9297956" (UID: "9b368057-7309-4308-9956-1850a9297956"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.418884 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b368057-7309-4308-9956-1850a9297956-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.418933 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b368057-7309-4308-9956-1850a9297956-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.418948 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qcv\" (UniqueName: \"kubernetes.io/projected/9b368057-7309-4308-9956-1850a9297956-kube-api-access-77qcv\") on node \"crc\" DevicePath \"\"" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.792851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" event={"ID":"9b368057-7309-4308-9956-1850a9297956","Type":"ContainerDied","Data":"0687be94c24be55440f411cf6b03ef0f1c8455e89eab84818c383651a859ab98"} Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.792890 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0687be94c24be55440f411cf6b03ef0f1c8455e89eab84818c383651a859ab98" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.792946 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496435-9d589" Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.867155 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 15:15:05 crc kubenswrapper[4793]: I0130 15:15:05.874796 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-tc6sn"] Jan 30 15:15:06 crc kubenswrapper[4793]: I0130 15:15:06.409446 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd3a15c-5ed4-45be-8091-84573a97a63a" path="/var/lib/kubelet/pods/afd3a15c-5ed4-45be-8091-84573a97a63a/volumes" Jan 30 15:15:42 crc kubenswrapper[4793]: I0130 15:15:42.413976 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:15:42 crc kubenswrapper[4793]: I0130 15:15:42.414593 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:16:00 crc kubenswrapper[4793]: I0130 15:16:00.687460 4793 scope.go:117] "RemoveContainer" containerID="1def2597602a7873d34fb216db52e7e4d4963d5b5a3ca0e36a14a7576a9a797f" Jan 30 15:16:12 crc kubenswrapper[4793]: I0130 15:16:12.413459 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 15:16:12 crc kubenswrapper[4793]: I0130 15:16:12.413861 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.414187 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.415356 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.415441 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.416668 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.416818 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5" gracePeriod=600 Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.695958 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5" exitCode=0 Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.695997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5"} Jan 30 15:16:42 crc kubenswrapper[4793]: I0130 15:16:42.696535 4793 scope.go:117] "RemoveContainer" containerID="fe210ffec30c66ef03e089d3fd74d3e97f69653430ee1b4e25a5745f320cbc71" Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.707833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"} Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.714674 4793 generic.go:334] "Generic (PLEG): container finished" podID="9cdbb05e-d475-48b2-9b59-297532883826" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" exitCode=0 
Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.714718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg6df/must-gather-x5n45" event={"ID":"9cdbb05e-d475-48b2-9b59-297532883826","Type":"ContainerDied","Data":"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf"} Jan 30 15:16:43 crc kubenswrapper[4793]: I0130 15:16:43.715503 4793 scope.go:117] "RemoveContainer" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:44 crc kubenswrapper[4793]: I0130 15:16:44.078300 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg6df_must-gather-x5n45_9cdbb05e-d475-48b2-9b59-297532883826/gather/0.log" Jan 30 15:16:52 crc kubenswrapper[4793]: I0130 15:16:52.745016 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:16:52 crc kubenswrapper[4793]: I0130 15:16:52.745751 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jg6df/must-gather-x5n45" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" containerID="cri-o://4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" gracePeriod=2 Jan 30 15:16:52 crc kubenswrapper[4793]: I0130 15:16:52.753488 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg6df/must-gather-x5n45"] Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.165504 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg6df_must-gather-x5n45_9cdbb05e-d475-48b2-9b59-297532883826/copy/0.log" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.166213 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.309717 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") pod \"9cdbb05e-d475-48b2-9b59-297532883826\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.309935 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") pod \"9cdbb05e-d475-48b2-9b59-297532883826\" (UID: \"9cdbb05e-d475-48b2-9b59-297532883826\") " Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.323403 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6" (OuterVolumeSpecName: "kube-api-access-nvqx6") pod "9cdbb05e-d475-48b2-9b59-297532883826" (UID: "9cdbb05e-d475-48b2-9b59-297532883826"). InnerVolumeSpecName "kube-api-access-nvqx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.412230 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvqx6\" (UniqueName: \"kubernetes.io/projected/9cdbb05e-d475-48b2-9b59-297532883826-kube-api-access-nvqx6\") on node \"crc\" DevicePath \"\"" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.557717 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9cdbb05e-d475-48b2-9b59-297532883826" (UID: "9cdbb05e-d475-48b2-9b59-297532883826"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.616785 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9cdbb05e-d475-48b2-9b59-297532883826-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.815995 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg6df_must-gather-x5n45_9cdbb05e-d475-48b2-9b59-297532883826/copy/0.log" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.816553 4793 generic.go:334] "Generic (PLEG): container finished" podID="9cdbb05e-d475-48b2-9b59-297532883826" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" exitCode=143 Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.816637 4793 scope.go:117] "RemoveContainer" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.816641 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg6df/must-gather-x5n45" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.836142 4793 scope.go:117] "RemoveContainer" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.881655 4793 scope.go:117] "RemoveContainer" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" Jan 30 15:16:53 crc kubenswrapper[4793]: E0130 15:16:53.882129 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b\": container with ID starting with 4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b not found: ID does not exist" containerID="4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.882171 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b"} err="failed to get container status \"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b\": rpc error: code = NotFound desc = could not find container \"4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b\": container with ID starting with 4bb1492fdf7cb3ef0509b995e6506dd27bf41f378ae5e94d3499945ef4fbc00b not found: ID does not exist" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.882190 4793 scope.go:117] "RemoveContainer" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:53 crc kubenswrapper[4793]: E0130 15:16:53.882434 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf\": container with ID starting with ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf not found: ID does not exist" containerID="ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf" Jan 30 15:16:53 crc kubenswrapper[4793]: I0130 15:16:53.882467 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf"} err="failed to get container status \"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf\": rpc error: code = NotFound desc = could not find container \"ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf\": container with ID starting with ba8725e57a48160d5711655ca07530748831f6b24ca6ff31737ea455d8b462cf not found: ID does not exist" Jan 30 15:16:54 crc kubenswrapper[4793]: I0130 15:16:54.408604 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdbb05e-d475-48b2-9b59-297532883826" path="/var/lib/kubelet/pods/9cdbb05e-d475-48b2-9b59-297532883826/volumes" Jan 30 15:17:00 crc kubenswrapper[4793]: I0130 15:17:00.750369 4793 scope.go:117] "RemoveContainer" containerID="cc41eecc94295c98eb3214210729f1c635aad07b9ddd5ced865321fef6013a0f" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.208233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:17:58 crc kubenswrapper[4793]: E0130 15:17:58.209214 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b368057-7309-4308-9956-1850a9297956" containerName="collect-profiles" Jan 30 15:17:58 crc 
kubenswrapper[4793]: I0130 15:17:58.209231 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b368057-7309-4308-9956-1850a9297956" containerName="collect-profiles" Jan 30 15:17:58 crc kubenswrapper[4793]: E0130 15:17:58.209251 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="gather" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.209259 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="gather" Jan 30 15:17:58 crc kubenswrapper[4793]: E0130 15:17:58.209274 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.209284 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.212107 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="copy" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.212142 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbb05e-d475-48b2-9b59-297532883826" containerName="gather" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.212174 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b368057-7309-4308-9956-1850a9297956" containerName="collect-profiles" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.213802 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.228950 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.384120 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.384514 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.384577 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.485903 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 
15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.486094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.486113 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.488549 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.488661 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.509870 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"community-operators-rpz58\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:58 crc kubenswrapper[4793]: I0130 15:17:58.599392 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:17:59 crc kubenswrapper[4793]: I0130 15:17:59.199754 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:17:59 crc kubenswrapper[4793]: I0130 15:17:59.391692 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerStarted","Data":"7572d1d4da12bd986bc215ee7e50ae0a56a257908a7d2e2006c6a004836380bd"} Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.401975 4793 generic.go:334] "Generic (PLEG): container finished" podID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" exitCode=0 Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.410905 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58"} Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.411556 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:18:00 crc kubenswrapper[4793]: I0130 15:18:00.842550 4793 scope.go:117] "RemoveContainer" containerID="568ed0e82f10baad26d3430efb936eb0714fc3fed75c7084e20ef051683db5ff" Jan 30 15:18:02 crc kubenswrapper[4793]: I0130 15:18:02.422185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerStarted","Data":"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35"} Jan 30 15:18:03 crc kubenswrapper[4793]: I0130 15:18:03.435178 4793 generic.go:334] "Generic (PLEG): container finished" podID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" exitCode=0 Jan 30 15:18:03 crc kubenswrapper[4793]: I0130 15:18:03.435235 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35"} Jan 30 15:18:04 crc kubenswrapper[4793]: I0130 15:18:04.447093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerStarted","Data":"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba"} Jan 30 15:18:04 crc kubenswrapper[4793]: I0130 15:18:04.469779 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpz58" podStartSLOduration=2.667915434 podStartE2EDuration="6.46975986s" podCreationTimestamp="2026-01-30 15:17:58 +0000 UTC" firstStartedPulling="2026-01-30 15:18:00.411348124 +0000 UTC m=+5691.112696615" lastFinishedPulling="2026-01-30 15:18:04.21319255 +0000 UTC m=+5694.914541041" observedRunningTime="2026-01-30 15:18:04.464735657 +0000 UTC m=+5695.166084148" watchObservedRunningTime="2026-01-30 15:18:04.46975986 +0000 UTC m=+5695.171108351" Jan 30 15:18:08 crc kubenswrapper[4793]: I0130 15:18:08.600193 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:08 
crc kubenswrapper[4793]: I0130 15:18:08.600861 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:08 crc kubenswrapper[4793]: I0130 15:18:08.645749 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:09 crc kubenswrapper[4793]: I0130 15:18:09.552887 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:09 crc kubenswrapper[4793]: I0130 15:18:09.625870 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:18:11 crc kubenswrapper[4793]: I0130 15:18:11.515672 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rpz58" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" containerID="cri-o://a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" gracePeriod=2 Jan 30 15:18:11 crc kubenswrapper[4793]: I0130 15:18:11.932826 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.056553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") pod \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.056819 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") pod \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.056916 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") pod \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\" (UID: \"851b6232-0ffd-4c7d-a8ee-fa085e0790f0\") " Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.057797 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities" (OuterVolumeSpecName: "utilities") pod "851b6232-0ffd-4c7d-a8ee-fa085e0790f0" (UID: "851b6232-0ffd-4c7d-a8ee-fa085e0790f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.063401 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8" (OuterVolumeSpecName: "kube-api-access-dwgc8") pod "851b6232-0ffd-4c7d-a8ee-fa085e0790f0" (UID: "851b6232-0ffd-4c7d-a8ee-fa085e0790f0"). InnerVolumeSpecName "kube-api-access-dwgc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.119321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "851b6232-0ffd-4c7d-a8ee-fa085e0790f0" (UID: "851b6232-0ffd-4c7d-a8ee-fa085e0790f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.159331 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.159365 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgc8\" (UniqueName: \"kubernetes.io/projected/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-kube-api-access-dwgc8\") on node \"crc\" DevicePath \"\"" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.159378 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/851b6232-0ffd-4c7d-a8ee-fa085e0790f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529101 4793 generic.go:334] "Generic (PLEG): container finished" podID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" exitCode=0 Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529141 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba"} Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529166 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpz58" event={"ID":"851b6232-0ffd-4c7d-a8ee-fa085e0790f0","Type":"ContainerDied","Data":"7572d1d4da12bd986bc215ee7e50ae0a56a257908a7d2e2006c6a004836380bd"} Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529181 4793 scope.go:117] "RemoveContainer" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.529298 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpz58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.550713 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.562232 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rpz58"] Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.571660 4793 scope.go:117] "RemoveContainer" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.595032 4793 scope.go:117] "RemoveContainer" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.642344 4793 scope.go:117] "RemoveContainer" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" Jan 30 15:18:12 crc kubenswrapper[4793]: E0130 15:18:12.642784 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba\": container with ID starting with a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba not found: ID does not exist" containerID="a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.642835 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba"} err="failed to get container status \"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba\": rpc error: code = NotFound desc = could not find container \"a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba\": container with ID starting with a0490e172f0580b647ffe1fdf53b1285b718b9a296f2787af6cffa03e52fcbba not found: ID does not exist" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.642862 4793 scope.go:117] "RemoveContainer" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" Jan 30 15:18:12 crc kubenswrapper[4793]: E0130 15:18:12.643551 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35\": container with ID starting with 6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35 not found: ID does not exist" containerID="6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.643599 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35"} err="failed to get container status \"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35\": rpc error: code = NotFound desc = could not find container \"6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35\": container with ID starting with 6566749bcd550b86024e661975e599d96cd02981b60713f6748c3be5eb63de35 not found: ID does not exist" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.643644 4793 scope.go:117] "RemoveContainer" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" Jan 30 15:18:12 crc kubenswrapper[4793]: E0130 15:18:12.644180 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58\": container with ID starting with c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58 not found: ID does not exist" containerID="c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58" Jan 30 15:18:12 crc kubenswrapper[4793]: I0130 15:18:12.644214 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58"} err="failed to get container status \"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58\": rpc error: code = NotFound desc = could not find container \"c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58\": container with ID starting with c6114fbb1b7bea351ea2e2e0689d55dc585a9e12a6f86a110064ecdd29b53f58 not found: ID does not exist" Jan 30 15:18:14 crc kubenswrapper[4793]: I0130 15:18:14.411595 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" path="/var/lib/kubelet/pods/851b6232-0ffd-4c7d-a8ee-fa085e0790f0/volumes" Jan 30 15:18:42 crc kubenswrapper[4793]: I0130 15:18:42.413869 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:18:42 crc kubenswrapper[4793]: I0130 15:18:42.415321 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:19:12 crc kubenswrapper[4793]: I0130 15:19:12.413424 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:19:12 crc kubenswrapper[4793]: I0130 15:19:12.413928 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.413947 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.414768 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.414830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.415943 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:19:42 crc kubenswrapper[4793]: I0130 15:19:42.416037 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" gracePeriod=600 Jan 30 15:19:42 crc kubenswrapper[4793]: E0130 15:19:42.536455 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344027 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" exitCode=0 Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce"} Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344127 4793 scope.go:117] "RemoveContainer" containerID="2e917dcf8d0541fa761d833d92780fc95c344c876dc9aae353982d89d80846a5" Jan 30 15:19:43 crc kubenswrapper[4793]: I0130 15:19:43.344945 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:19:43 crc kubenswrapper[4793]: E0130 15:19:43.345421 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.328978 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:19:54 crc kubenswrapper[4793]: E0130 15:19:54.331110 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331244 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" Jan 30 15:19:54 crc kubenswrapper[4793]: E0130 15:19:54.331375 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-utilities" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331456 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-utilities" Jan 30 15:19:54 crc kubenswrapper[4793]: E0130 15:19:54.331552 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-content" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331632 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="extract-content" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.331963 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="851b6232-0ffd-4c7d-a8ee-fa085e0790f0" containerName="registry-server" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.333386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.338478 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5swb7"/"openshift-service-ca.crt" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.338749 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5swb7"/"kube-root-ca.crt" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.353803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.353918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.455494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.462410 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.462775 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.467338 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.488737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"must-gather-9zdpz\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:54 crc kubenswrapper[4793]: I0130 15:19:54.664189 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:19:55 crc kubenswrapper[4793]: I0130 15:19:55.202624 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:19:55 crc kubenswrapper[4793]: I0130 15:19:55.455369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerStarted","Data":"bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5"} Jan 30 15:19:55 crc kubenswrapper[4793]: I0130 15:19:55.455600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerStarted","Data":"05ef02364cb3c6cb1aac7a4fce6e06fe6eef6f77fca3776b5d2229196af4cde1"} Jan 30 15:19:56 crc kubenswrapper[4793]: I0130 15:19:56.398682 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:19:56 crc kubenswrapper[4793]: E0130 15:19:56.399387 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:19:56 crc kubenswrapper[4793]: I0130 15:19:56.468300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerStarted","Data":"4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144"} Jan 30 15:19:56 crc kubenswrapper[4793]: I0130 15:19:56.495749 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5swb7/must-gather-9zdpz" podStartSLOduration=2.495726538 podStartE2EDuration="2.495726538s" podCreationTimestamp="2026-01-30 15:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:19:56.492004386 +0000 UTC m=+5807.193352877" watchObservedRunningTime="2026-01-30 15:19:56.495726538 +0000 UTC m=+5807.197075039" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.783267 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/crc-debug-czd7z"] Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.785791 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.788551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5swb7"/"default-dockercfg-nk8fm" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.974699 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:19:59 crc kubenswrapper[4793]: I0130 15:19:59.975024 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.076582 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.076678 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.076804 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.122277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"crc-debug-czd7z\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.408809 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:00 crc kubenswrapper[4793]: W0130 15:20:00.453405 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ea6d10_0cd4_4c62_aa35_f91c715e4ba4.slice/crio-13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11 WatchSource:0}: Error finding container 13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11: Status 404 returned error can't find the container with id 13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11 Jan 30 15:20:00 crc kubenswrapper[4793]: I0130 15:20:00.516578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-czd7z" event={"ID":"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4","Type":"ContainerStarted","Data":"13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11"} Jan 30 15:20:01 crc kubenswrapper[4793]: I0130 15:20:01.526357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-czd7z" event={"ID":"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4","Type":"ContainerStarted","Data":"86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b"} Jan 30 15:20:01 crc kubenswrapper[4793]: I0130 15:20:01.543731 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5swb7/crc-debug-czd7z" podStartSLOduration=2.543710675 podStartE2EDuration="2.543710675s" podCreationTimestamp="2026-01-30 15:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:20:01.542772882 +0000 UTC m=+5812.244121383" watchObservedRunningTime="2026-01-30 15:20:01.543710675 +0000 UTC m=+5812.245059166" Jan 30 15:20:08 crc kubenswrapper[4793]: I0130 15:20:08.399363 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:08 crc kubenswrapper[4793]: E0130 15:20:08.400349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:23 crc kubenswrapper[4793]: I0130 15:20:23.398982 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:23 crc kubenswrapper[4793]: E0130 15:20:23.401818 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:34 crc kubenswrapper[4793]: I0130 15:20:34.401578 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:34 crc kubenswrapper[4793]: E0130 15:20:34.402689 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:45 crc kubenswrapper[4793]: I0130 15:20:45.398028 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:20:45 crc kubenswrapper[4793]: E0130 15:20:45.398793 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:20:47 crc kubenswrapper[4793]: I0130 15:20:47.931745 4793 generic.go:334] "Generic (PLEG): container finished" podID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerID="86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b" exitCode=0 Jan 30 15:20:47 crc kubenswrapper[4793]: I0130 15:20:47.931839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-czd7z" event={"ID":"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4","Type":"ContainerDied","Data":"86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b"} Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.040915 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.079479 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-czd7z"] Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.087284 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-czd7z"] Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.214892 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") pod \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.215060 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host" (OuterVolumeSpecName: "host") pod "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" (UID: "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.215149 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") pod \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\" (UID: \"47ea6d10-0cd4-4c62-aa35-f91c715e4ba4\") " Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.215596 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.221294 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq" (OuterVolumeSpecName: "kube-api-access-8btrq") pod "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" (UID: "47ea6d10-0cd4-4c62-aa35-f91c715e4ba4"). InnerVolumeSpecName "kube-api-access-8btrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.317571 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4-kube-api-access-8btrq\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.947985 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13bef320d9bc9854ae28f181161db5187282c58aabd94edd9f6a30465dbe0e11" Jan 30 15:20:49 crc kubenswrapper[4793]: I0130 15:20:49.948084 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-czd7z" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.366274 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/crc-debug-tl96s"] Jan 30 15:20:50 crc kubenswrapper[4793]: E0130 15:20:50.366682 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerName="container-00" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.366694 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerName="container-00" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.366899 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" containerName="container-00" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.367478 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.371259 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5swb7"/"default-dockercfg-nk8fm" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.410240 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ea6d10-0cd4-4c62-aa35-f91c715e4ba4" path="/var/lib/kubelet/pods/47ea6d10-0cd4-4c62-aa35-f91c715e4ba4/volumes" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.539557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.539600 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.641805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.641874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.641949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.673013 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"crc-debug-tl96s\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.683925 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:50 crc kubenswrapper[4793]: I0130 15:20:50.956767 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-tl96s" event={"ID":"2407e444-c4b6-488a-b397-10febd8cdf44","Type":"ContainerStarted","Data":"af38a403c914f66e3391e16b5d16fd2af804d00b0066101c8b2d179624a3dc49"} Jan 30 15:20:51 crc kubenswrapper[4793]: E0130 15:20:51.342482 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407e444_c4b6_488a_b397_10febd8cdf44.slice/crio-85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f.scope\": RecentStats: unable to find data in memory cache]" Jan 30 15:20:51 crc kubenswrapper[4793]: I0130 15:20:51.967575 4793 generic.go:334] "Generic (PLEG): container finished" podID="2407e444-c4b6-488a-b397-10febd8cdf44" containerID="85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f" exitCode=0 Jan 30 15:20:51 crc kubenswrapper[4793]: I0130 15:20:51.967636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-tl96s" event={"ID":"2407e444-c4b6-488a-b397-10febd8cdf44","Type":"ContainerDied","Data":"85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f"} Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.076642 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194207 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") pod \"2407e444-c4b6-488a-b397-10febd8cdf44\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") pod \"2407e444-c4b6-488a-b397-10febd8cdf44\" (UID: \"2407e444-c4b6-488a-b397-10febd8cdf44\") " Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194540 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host" (OuterVolumeSpecName: "host") pod "2407e444-c4b6-488a-b397-10febd8cdf44" (UID: "2407e444-c4b6-488a-b397-10febd8cdf44"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.194779 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2407e444-c4b6-488a-b397-10febd8cdf44-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.205274 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72" (OuterVolumeSpecName: "kube-api-access-nrm72") pod "2407e444-c4b6-488a-b397-10febd8cdf44" (UID: "2407e444-c4b6-488a-b397-10febd8cdf44"). InnerVolumeSpecName "kube-api-access-nrm72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.296218 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm72\" (UniqueName: \"kubernetes.io/projected/2407e444-c4b6-488a-b397-10febd8cdf44-kube-api-access-nrm72\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.885090 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-tl96s"] Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.896085 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-tl96s"] Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.983102 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af38a403c914f66e3391e16b5d16fd2af804d00b0066101c8b2d179624a3dc49" Jan 30 15:20:53 crc kubenswrapper[4793]: I0130 15:20:53.983205 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-tl96s" Jan 30 15:20:54 crc kubenswrapper[4793]: I0130 15:20:54.409364 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" path="/var/lib/kubelet/pods/2407e444-c4b6-488a-b397-10febd8cdf44/volumes" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.109371 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5swb7/crc-debug-g44lj"] Jan 30 15:20:55 crc kubenswrapper[4793]: E0130 15:20:55.110182 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" containerName="container-00" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.110212 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" containerName="container-00" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.110555 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2407e444-c4b6-488a-b397-10febd8cdf44" containerName="container-00" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.111402 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.113608 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5swb7"/"default-dockercfg-nk8fm" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.230663 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.230973 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.332590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.333250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.333381 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.349368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"crc-debug-g44lj\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: I0130 15:20:55.426981 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:55 crc kubenswrapper[4793]: W0130 15:20:55.455199 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d975c3f_305f_4a75_9776_5a5c98e567f3.slice/crio-4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21 WatchSource:0}: Error finding container 4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21: Status 404 returned error can't find the container with id 4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21 Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.002670 4793 generic.go:334] "Generic (PLEG): container finished" podID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerID="437d7045fe7a0e2d3b1219fd70c03224ab5b83cded85d2ea40b54b54f24df894" exitCode=0 Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.003030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-g44lj" event={"ID":"6d975c3f-305f-4a75-9776-5a5c98e567f3","Type":"ContainerDied","Data":"437d7045fe7a0e2d3b1219fd70c03224ab5b83cded85d2ea40b54b54f24df894"} Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.003137 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/crc-debug-g44lj" event={"ID":"6d975c3f-305f-4a75-9776-5a5c98e567f3","Type":"ContainerStarted","Data":"4b070602a573fe785bf9994bac800be9fc5273e7b2c0faa1075420ace1133a21"} Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.043843 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-g44lj"] Jan 30 15:20:56 crc kubenswrapper[4793]: I0130 15:20:56.051595 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/crc-debug-g44lj"] Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.106571 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167011 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") pod \"6d975c3f-305f-4a75-9776-5a5c98e567f3\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") pod \"6d975c3f-305f-4a75-9776-5a5c98e567f3\" (UID: \"6d975c3f-305f-4a75-9776-5a5c98e567f3\") " Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167358 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host" (OuterVolumeSpecName: "host") pod "6d975c3f-305f-4a75-9776-5a5c98e567f3" (UID: "6d975c3f-305f-4a75-9776-5a5c98e567f3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.167716 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d975c3f-305f-4a75-9776-5a5c98e567f3-host\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.175312 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9" (OuterVolumeSpecName: "kube-api-access-z2ks9") pod "6d975c3f-305f-4a75-9776-5a5c98e567f3" (UID: "6d975c3f-305f-4a75-9776-5a5c98e567f3"). InnerVolumeSpecName "kube-api-access-z2ks9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:20:57 crc kubenswrapper[4793]: I0130 15:20:57.269272 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2ks9\" (UniqueName: \"kubernetes.io/projected/6d975c3f-305f-4a75-9776-5a5c98e567f3-kube-api-access-z2ks9\") on node \"crc\" DevicePath \"\"" Jan 30 15:20:58 crc kubenswrapper[4793]: I0130 15:20:58.023509 4793 scope.go:117] "RemoveContainer" containerID="437d7045fe7a0e2d3b1219fd70c03224ab5b83cded85d2ea40b54b54f24df894" Jan 30 15:20:58 crc kubenswrapper[4793]: I0130 15:20:58.023682 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/crc-debug-g44lj" Jan 30 15:20:58 crc kubenswrapper[4793]: I0130 15:20:58.409122 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" path="/var/lib/kubelet/pods/6d975c3f-305f-4a75-9776-5a5c98e567f3/volumes" Jan 30 15:21:00 crc kubenswrapper[4793]: I0130 15:21:00.403361 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:21:00 crc kubenswrapper[4793]: E0130 15:21:00.403879 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:21:13 crc kubenswrapper[4793]: I0130 15:21:13.398130 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:21:13 crc kubenswrapper[4793]: E0130 15:21:13.398885 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:21:25 crc kubenswrapper[4793]: I0130 15:21:25.398217 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:21:25 crc kubenswrapper[4793]: E0130 15:21:25.399132 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:21:36 crc kubenswrapper[4793]: I0130 15:21:36.422078 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:21:36 crc kubenswrapper[4793]: E0130 15:21:36.423448 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:21:44 crc kubenswrapper[4793]: I0130 15:21:44.906314 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api/0.log" Jan 30 15:21:44 crc kubenswrapper[4793]: I0130 15:21:44.988647 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-577797dd7d-dhrt2_a389d76c-e0de-4b8d-84b2-82aedd050f7f/barbican-api-log/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.174554 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.221296 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dd7f7f8-htnvl_af929740-592b-4d7f-9c99-061df6882206/barbican-keystone-listener-log/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.284085 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.393864 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d78d76787-7f5jh_653cedf2-2880-49ff-b177-8974b9f0ecdf/barbican-worker-log/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.539061 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k7kn6_2ba6b544-0042-43d7-abe9-bc40439f804b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.679194 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-central-agent/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.739792 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/ceilometer-notification-agent/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.828256 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/proxy-httpd/0.log" Jan 30 15:21:45 crc kubenswrapper[4793]: I0130 15:21:45.845196 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d/sg-core/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.055683 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api-log/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.115949 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3105dc9e-c178-4799-a658-044d4d9b8312/cinder-api/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.290033 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/cinder-scheduler/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.431078 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_83e26b73-5483-4b6c-88cd-5d794f14ef5a/probe/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.500625 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hgwmc_260f1ea9-6ba5-40aa-ab56-e95237cb1009/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.688577 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jchk2_44f4e8fd-4511-4670-944a-e37dfc6238c8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.728451 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log" Jan 30 15:21:46 crc kubenswrapper[4793]: I0130 15:21:46.920018 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/init/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.022958 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qgztn_f1632f4b-e0e5-4069-a77b-ae4f1911869b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.203545 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-5bm62_b3e8eb28-c303-409b-a89b-b273b2f56fff/dnsmasq-dns/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.307273 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-log/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.310865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae7d1df8-4b0f-46f7-85f4-e24fd65a919d/glance-httpd/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.487169 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-httpd/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.538664 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f96d1ae8-18a5-4651-b460-21e9ddb50684/glance-log/0.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.867842 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/1.log" Jan 30 15:21:47 crc kubenswrapper[4793]: I0130 15:21:47.896089 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon/2.log" Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.313204 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bvwnp_ae4f8964-b104-43bb-8356-bb53a9635527/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.421305 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lqrxr_1ee9c552-088f-4e61-961e-7062bf6e874b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.446789 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5b9fc5f8f6-nj7xv_7c37d49c-cbd6-47d6-8f29-51ec6fac2f61/horizon-log/0.log" Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.577344 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496421-n28p5_617a2857-c4b0-4558-9834-551a98cd534f/keystone-cron/0.log" Jan 30 15:21:48 crc kubenswrapper[4793]: I0130 15:21:48.883847 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a3625667-be35-4d81-84f9-e00593f1c627/kube-state-metrics/0.log" Jan 30 15:21:49 crc kubenswrapper[4793]: I0130 15:21:49.216979 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8f9s2_96926233-9ce4-4a0b-bab4-d0c4fa90389b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:49 crc kubenswrapper[4793]: I0130 15:21:49.231145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d689db86f-zslsz_0ed57c3d-4992-4cfa-8655-1587b5897df6/keystone-api/0.log" Jan 30 15:21:49 crc kubenswrapper[4793]: I0130 15:21:49.398978 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:21:49 crc kubenswrapper[4793]: E0130 15:21:49.399236 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:21:50 crc kubenswrapper[4793]: I0130 15:21:50.206884 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fngmk_92b9de01-1b86-4b7b-ae4f-98ef7dcfa9b5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:50 crc kubenswrapper[4793]: I0130 15:21:50.413694 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-httpd/0.log" Jan 30 15:21:50 crc kubenswrapper[4793]: I0130 15:21:50.721863 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-668ffd44cc-lhns4_d9f34138-4dce-415b-ad20-cf0ba588f012/neutron-api/0.log" Jan 30 15:21:51 crc kubenswrapper[4793]: I0130 15:21:51.501322 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9f6d85c2-6366-4c7d-a49d-cbb0f5d36fb7/nova-cell0-conductor-conductor/0.log" Jan 30 15:21:51 crc kubenswrapper[4793]: I0130 15:21:51.958281 4793 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d2acd609-26c0-4b98-861f-a8b12fcd07bf/nova-cell1-conductor-conductor/0.log" Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.218997 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-log/0.log" Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.287006 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_abaabb74-42dd-40b6-9cb7-69db46f235df/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.607865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sk8t8_dfc4d2ba-0414-4f1e-8733-a75d39218ef8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.671986 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-log/0.log" Jan 30 15:21:52 crc kubenswrapper[4793]: I0130 15:21:52.935598 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b4991f7-e6e6-4dfd-a75b-25a7506591e1/nova-api-api/0.log" Jan 30 15:21:53 crc kubenswrapper[4793]: I0130 15:21:53.599551 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log" Jan 30 15:21:53 crc kubenswrapper[4793]: I0130 15:21:53.840855 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/mysql-bootstrap/0.log" Jan 30 15:21:53 crc kubenswrapper[4793]: I0130 15:21:53.853661 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41e0025f-6abc-4554-b7a0-c132607aec86/galera/0.log" Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.146828 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log" Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.310421 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9e04e820-112a-4afa-b908-f9b8be3e9e7c/nova-scheduler-scheduler/0.log" Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.575412 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/mysql-bootstrap/0.log" Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.657676 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f45b0069-4cb7-4dfd-ac2d-1473cacbde1f/galera/0.log" Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.948177 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_dddf6ae2-9b16-4f3e-ba0c-9fed005e44e7/openstackclient/0.log" Jan 30 15:21:54 crc kubenswrapper[4793]: I0130 15:21:54.962736 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-45fd5_230700ff-5087-4d0d-9d93-90b597d2ef72/ovn-controller/0.log" Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.319549 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vx7z5_2eaf3033-e5f4-48bc-bdee-b7d97e57e765/openstack-network-exporter/0.log" Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.636707 4793 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log" Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.891099 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovs-vswitchd/0.log" Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.900939 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_02223b96-2b8b-4d32-b7ba-9cb517e03f13/nova-metadata-metadata/0.log" Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.903428 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server/0.log" Jan 30 15:21:55 crc kubenswrapper[4793]: I0130 15:21:55.946030 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-56x4d_f6d71a04-6d3d-4444-9963-950135c3d6da/ovsdb-server-init/0.log" Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.200568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-45sz7_dbd66148-cdd0-4e92-9601-3ef1576a5d3f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.361615 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/openstack-network-exporter/0.log" Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.558163 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_270527bd-015e-4904-8916-07993e081611/ovn-northd/0.log" Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.644460 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/openstack-network-exporter/0.log" Jan 30 15:21:56 crc kubenswrapper[4793]: I0130 15:21:56.707363 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfa8998b-ee3a-4aea-80e8-c59620a5308a/ovsdbserver-nb/0.log" Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.168701 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/ovsdbserver-sb/0.log" Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.265890 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_285be7d6-1f03-43af-8087-46ba257183ec/openstack-network-exporter/0.log" Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.713419 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log" Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.793795 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-api/0.log" Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.835403 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65f95549b8-wtpxl_57bfc822-1d30-49bc-a077-686b68e9c1e6/placement-log/0.log" Jan 30 15:21:57 crc kubenswrapper[4793]: I0130 15:21:57.915517 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/setup-container/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.072001 4793 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3b0247ba-adfd-4195-bf23-91478001fed7/rabbitmq/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.136085 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.470803 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/rabbitmq/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.521775 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7ffc0461-9589-45f5-a656-85cc01de58ed/setup-container/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.550636 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_89e99d15-97ad-4ac5-ba68-82ef88460222/memcached/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.551420 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4s4f7_0538b501-a861-4302-b26e-f5cfb17ed62a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.796509 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t7bl5_b89c70f6-dabd-4984-8f21-235a9ab2f307/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:58 crc kubenswrapper[4793]: I0130 15:21:58.849498 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qnmv8_03127c65-edbf-41bd-9543-35ae0eddbff6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.031556 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5q58_7915ec77-ca16-4f23-a367-42b525c80284/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.032235 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nlncv_3cad1dbc-effe-48d8-af45-df0a45e16783/ssh-known-hosts-edpm-deployment/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.287485 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-httpd/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.306390 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7767cf976c-8m6hn_de3851c3-345e-41a1-ad9e-ee3f4e357d85/proxy-server/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.433466 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-q459t_50011731-846f-4e86-8664-f9c797dc64ed/swift-ring-rebalance/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.524794 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-auditor/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.560867 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-reaper/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.709933 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-server/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.756313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/account-replicator/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.827167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-replicator/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.859699 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-auditor/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.914946 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-server/0.log" Jan 30 15:21:59 crc kubenswrapper[4793]: I0130 15:21:59.988834 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/container-updater/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.096353 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-auditor/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.138966 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-server/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.178629 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-expirer/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.205068 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-replicator/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.255698 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/object-updater/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.378526 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/rsync/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.448568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_76182868-5b55-403e-a2be-0c6879e9a2e0/swift-recon-cron/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.740841 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s2rdb_8b1317e1-63f1-4b06-aa31-5df5459c6ce6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.907568 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4bf53e2d-d024-4526-ada2-0ee6b461babb/tempest-tests-tempest-tests-runner/0.log" Jan 30 15:22:00 crc kubenswrapper[4793]: I0130 15:22:00.995943 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8de9d25e-7ca7-4338-a64e-ed95f7bd9de9/test-operator-logs-container/0.log" Jan 30 15:22:01 crc kubenswrapper[4793]: I0130 15:22:01.077167 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wsmkt_dcc6f491-d722-48e4-bcb8-8a9de7603786/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 15:22:01 crc kubenswrapper[4793]: I0130 15:22:01.398925 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:22:01 crc kubenswrapper[4793]: E0130 15:22:01.399204 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:22:13 crc kubenswrapper[4793]: I0130 15:22:13.397938 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:22:13 crc kubenswrapper[4793]: E0130 15:22:13.398874 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.314104 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"] Jan 30 15:22:19 crc kubenswrapper[4793]: E0130 15:22:19.315752 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerName="container-00" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.315828 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerName="container-00" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.316111 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d975c3f-305f-4a75-9776-5a5c98e567f3" containerName="container-00" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.317658 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.329342 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"] Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.428574 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.429040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.429155 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.531826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.531967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.532146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.532762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.532950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.574589 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"redhat-marketplace-4r5cl\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:19 crc kubenswrapper[4793]: I0130 15:22:19.635222 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:20 crc kubenswrapper[4793]: I0130 15:22:20.183235 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"] Jan 30 15:22:20 crc kubenswrapper[4793]: I0130 15:22:20.200961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerStarted","Data":"4b6786f8facf2d6a7b0627908cca7f765498a995e412d74b8f28cd406462599b"} Jan 30 15:22:21 crc kubenswrapper[4793]: I0130 15:22:21.214201 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911" exitCode=0 Jan 30 15:22:21 crc kubenswrapper[4793]: I0130 15:22:21.214809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"} Jan 30 15:22:23 crc kubenswrapper[4793]: I0130 15:22:23.244400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerStarted","Data":"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"} Jan 30 15:22:24 crc kubenswrapper[4793]: I0130 15:22:24.256839 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9" exitCode=0 Jan 30 15:22:24 crc kubenswrapper[4793]: I0130 15:22:24.257070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"} Jan 30 15:22:25 crc kubenswrapper[4793]: I0130 15:22:25.287431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerStarted","Data":"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"} Jan 30 15:22:25 crc kubenswrapper[4793]: I0130 15:22:25.318216 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4r5cl" podStartSLOduration=2.7019189150000003 podStartE2EDuration="6.318197609s" podCreationTimestamp="2026-01-30 15:22:19 +0000 UTC" firstStartedPulling="2026-01-30 15:22:21.229092515 +0000 UTC m=+5951.930441006" lastFinishedPulling="2026-01-30 15:22:24.845371209 +0000 UTC m=+5955.546719700" observedRunningTime="2026-01-30 15:22:25.314431045 +0000 UTC m=+5956.015779566" watchObservedRunningTime="2026-01-30 15:22:25.318197609 +0000 UTC m=+5956.019546100" Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.398724 4793 scope.go:117] "RemoveContainer" 
containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:22:28 crc kubenswrapper[4793]: E0130 15:22:28.399201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.536900 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-8bg6c_ec981da4-a3ba-4e4e-a0eb-2168ab79fe77/manager/0.log" Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.587672 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.819135 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.824491 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:22:28 crc kubenswrapper[4793]: I0130 15:22:28.870895 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.049463 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/util/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.078821 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/extract/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.180280 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbbf158dfd22247c7b024bd2a29980f6a19bd01c166babf688be3899a8pfk9l_fa68ea40-d98a-4561-8dce-aa3e81fe5a96/pull/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.309443 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-hjpkr_6f991e04-2db3-4b32-bc83-8bbce4ce7a08/manager/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.309567 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-9kwwr_8835e5d9-c37d-4744-95cb-c56c10a58647/manager/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.635361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.635678 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:29 crc 
kubenswrapper[4793]: I0130 15:22:29.636285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-k4tz9_8d24cd33-2902-424a-8ffc-76b1e4c2f482/manager/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.683926 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.701077 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-g5848_1d859404-a29c-46c9-b66a-fed5ff0b13f0/manager/0.log" Jan 30 15:22:29 crc kubenswrapper[4793]: I0130 15:22:29.843663 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-m4q78_710c57e4-a09e-4db1-a03b-13db05085d41/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.109967 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v77jx_7c34e714-0f18-4e41-ab9c-1dfe4859e644/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.238948 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-khfs7_97dfa7d1-cc75-4fa1-84cf-7a5d7f5da642/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.387370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.411366 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-82cvq_bdcd04f7-09fa-4b1b-8b99-3de61a28a337/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.503350 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-9ftxd_ce9be14f-8255-421e-91b4-a30fc5482ff4/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.663460 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n29l5_fa88d14c-0581-439c-9da1-f1123e41a65a/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.807481 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-x6pk6_05415bc7-22dc-4b15-a047-6ed62755638d/manager/0.log" Jan 30 15:22:30 crc kubenswrapper[4793]: I0130 15:22:30.977513 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-vtx9d_31ca6ac1-d2da-4325-baa4-e18fc3514721/manager/0.log" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.051284 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5nsr4_53576ec8-2f6d-4781-8906-726529cc6049/manager/0.log" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.159694 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlwsrs_e446e97c-6e9f-4dc2-b5fd-fb63451fd326/manager/0.log" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.413042 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-977cfdb67-sp4rd_2cec3782-823b-4ddf-909a-e773203cd721/operator/0.log" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.705751 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.707778 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.726998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.784455 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x56zx_e3b6e703-4540-4739-87cd-8699d4e04903/registry-server/0.log" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.789003 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.789502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.789921 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.892434 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.893785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.893894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.894661 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.897444 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:31 crc kubenswrapper[4793]: I0130 15:22:31.930836 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"redhat-operators-nlmdf\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.021370 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4ml88_6231ed92-57a8-4c48-9c75-e916940b22ea/manager/0.log" Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.076967 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.520905 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-27flx_02b8e60c-3514-4d72-bde6-5af374a926b1/manager/0.log" Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.679459 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:22:32 crc kubenswrapper[4793]: I0130 15:22:32.785089 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nb4g2_2aae677d-830b-44b8-a792-3d0b527aee89/operator/0.log" Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.019670 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-vxhpt_3eb94c51-d506-4273-898b-dba537cabea6/manager/0.log" Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.027268 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75c5857d49-pm446_e9854850-e645-4364-a471-bef994f8536c/manager/0.log" Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.225292 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-tv5vr_6b21b0ca-d506-4b1b-b6e1-06e2a96ae033/manager/0.log" Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.355877 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" exitCode=0 Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.355925 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997"} Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.355952 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"8372c971f9f6c2985247616cba22145cd94668d2cdaaebf62f2b83a40bacf8bb"} Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.454103 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-qb5xp_5e215cef-de14-424d-9028-a48bad979192/manager/0.log" Jan 30 15:22:33 crc kubenswrapper[4793]: I0130 15:22:33.826679 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-btjpp_f65e9448-ee4e-4f22-9bd7-ecf650cb36b5/manager/0.log" Jan 30 15:22:34 crc kubenswrapper[4793]: I0130 15:22:34.365208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f"} Jan 30 15:22:34 crc kubenswrapper[4793]: I0130 15:22:34.490638 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"] Jan 30 15:22:34 crc kubenswrapper[4793]: I0130 15:22:34.490948 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4r5cl" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" containerID="cri-o://73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" gracePeriod=2 Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.017384 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.160467 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") pod \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.160531 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") pod \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.160566 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") pod \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\" (UID: \"a9f9d306-0d7d-4586-a327-f32c5cfe12aa\") " Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.161116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities" (OuterVolumeSpecName: "utilities") pod "a9f9d306-0d7d-4586-a327-f32c5cfe12aa" (UID: "a9f9d306-0d7d-4586-a327-f32c5cfe12aa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.181293 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn" (OuterVolumeSpecName: "kube-api-access-sdjxn") pod "a9f9d306-0d7d-4586-a327-f32c5cfe12aa" (UID: "a9f9d306-0d7d-4586-a327-f32c5cfe12aa"). InnerVolumeSpecName "kube-api-access-sdjxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.186518 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9f9d306-0d7d-4586-a327-f32c5cfe12aa" (UID: "a9f9d306-0d7d-4586-a327-f32c5cfe12aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.263227 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.263264 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.263279 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdjxn\" (UniqueName: \"kubernetes.io/projected/a9f9d306-0d7d-4586-a327-f32c5cfe12aa-kube-api-access-sdjxn\") on node \"crc\" DevicePath \"\"" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.375949 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" exitCode=0 Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376014 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r5cl" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"} Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r5cl" event={"ID":"a9f9d306-0d7d-4586-a327-f32c5cfe12aa","Type":"ContainerDied","Data":"4b6786f8facf2d6a7b0627908cca7f765498a995e412d74b8f28cd406462599b"} Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.376121 4793 scope.go:117] "RemoveContainer" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.401171 4793 scope.go:117] "RemoveContainer" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.432501 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"] Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.445209 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r5cl"] Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.450827 4793 scope.go:117] "RemoveContainer" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.499847 4793 scope.go:117] "RemoveContainer" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" Jan 30 15:22:35 crc kubenswrapper[4793]: E0130 15:22:35.502627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8\": container with ID starting with 73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8 not found: ID does not exist" containerID="73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.502672 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8"} err="failed to get container status \"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8\": rpc error: code = NotFound desc = could not find container \"73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8\": container with ID starting with 73d031fe0803e37d6cbf268c7840f96cbc334482935b6f26641f6a4675681cb8 not found: ID does not exist" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.502699 4793 scope.go:117] "RemoveContainer" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9" Jan 30 15:22:35 crc kubenswrapper[4793]: E0130 15:22:35.503444 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9\": container with ID starting with c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9 not found: ID does not exist" containerID="c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.503491 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9"} err="failed to get container status \"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9\": rpc error: code = NotFound desc = could not find container \"c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9\": container with ID starting with c7a53d8b5355574a294bee8871e440e98d12d7da1e989c92a2201e0529d93af9 not found: ID does not exist" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.503521 4793 scope.go:117] "RemoveContainer" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911" Jan 30 15:22:35 crc kubenswrapper[4793]: E0130 15:22:35.504330 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911\": container with ID starting with 03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911 not found: ID does not exist" containerID="03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911" Jan 30 15:22:35 crc kubenswrapper[4793]: I0130 15:22:35.504368 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911"} err="failed to get container status \"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911\": rpc error: code = NotFound desc = could not find container \"03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911\": container with ID starting with 03cd76178a8bb73465a7ff8f0ac01f9f87acc939962ac295d2d4a92cae8b7911 not found: ID does not exist" Jan 30 15:22:36 crc kubenswrapper[4793]: I0130 15:22:36.409683 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" path="/var/lib/kubelet/pods/a9f9d306-0d7d-4586-a327-f32c5cfe12aa/volumes" Jan 30 15:22:42 crc kubenswrapper[4793]: I0130 15:22:42.398235 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:22:42 crc kubenswrapper[4793]: E0130 15:22:42.399888 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:22:45 crc kubenswrapper[4793]: I0130 15:22:45.470872 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" exitCode=0 Jan 30 15:22:45 crc kubenswrapper[4793]: I0130 15:22:45.471093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f"} Jan 30 15:22:47 crc kubenswrapper[4793]: I0130 15:22:47.491926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" 
event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} Jan 30 15:22:47 crc kubenswrapper[4793]: I0130 15:22:47.525827 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlmdf" podStartSLOduration=3.179543475 podStartE2EDuration="16.525809867s" podCreationTimestamp="2026-01-30 15:22:31 +0000 UTC" firstStartedPulling="2026-01-30 15:22:33.357509003 +0000 UTC m=+5964.058857494" lastFinishedPulling="2026-01-30 15:22:46.703775395 +0000 UTC m=+5977.405123886" observedRunningTime="2026-01-30 15:22:47.517535253 +0000 UTC m=+5978.218883744" watchObservedRunningTime="2026-01-30 15:22:47.525809867 +0000 UTC m=+5978.227158358" Jan 30 15:22:52 crc kubenswrapper[4793]: I0130 15:22:52.079888 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:52 crc kubenswrapper[4793]: I0130 15:22:52.080414 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:22:53 crc kubenswrapper[4793]: I0130 15:22:53.142692 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:22:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:22:53 crc kubenswrapper[4793]: > Jan 30 15:22:53 crc kubenswrapper[4793]: I0130 15:22:53.398652 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:22:53 crc kubenswrapper[4793]: E0130 15:22:53.398967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.462475 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/0.log" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.504325 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vqxml_10c05bcf-ffb2-4175-b323-067804ea3391/control-plane-machine-set-operator/1.log" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.765564 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/kube-rbac-proxy/0.log" Jan 30 15:22:55 crc kubenswrapper[4793]: I0130 15:22:55.793911 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-56g7n_afa7929d-37a8-4fa2-9733-158cab1c40ec/machine-api-operator/0.log" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.631719 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:22:56 crc kubenswrapper[4793]: E0130 15:22:56.632497 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-content" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.632806 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-content" Jan 30 15:22:56 crc kubenswrapper[4793]: E0130 15:22:56.632827 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-utilities" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.632839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="extract-utilities" Jan 30 15:22:56 crc kubenswrapper[4793]: E0130 15:22:56.632852 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.632860 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.633196 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f9d306-0d7d-4586-a327-f32c5cfe12aa" containerName="registry-server" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.634913 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.646773 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.710631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.710766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.710806 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.812519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.812704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.812863 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.813352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.813352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.838628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"certified-operators-d86tm\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:56 crc kubenswrapper[4793]: I0130 15:22:56.954657 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:22:57 crc kubenswrapper[4793]: I0130 15:22:57.560688 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:22:57 crc kubenswrapper[4793]: I0130 15:22:57.591597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerStarted","Data":"67b39464bec1710449607f7c3521e7192c615bc0f3447d2003996ee508c4b158"} Jan 30 15:22:58 crc kubenswrapper[4793]: I0130 15:22:58.603274 4793 generic.go:334] "Generic (PLEG): container finished" podID="c35934a1-325a-4231-8dde-9357aab2af3f" containerID="612fafe439052cb8b36014e5e1fdcf820fd924ff9c4da2d5454871cca09f6085" exitCode=0 Jan 30 15:22:58 crc kubenswrapper[4793]: I0130 15:22:58.603347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"612fafe439052cb8b36014e5e1fdcf820fd924ff9c4da2d5454871cca09f6085"} Jan 30 15:23:00 crc kubenswrapper[4793]: I0130 15:23:00.628183 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerStarted","Data":"b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9"} Jan 30 15:23:03 crc kubenswrapper[4793]: I0130 15:23:03.125828 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:03 crc kubenswrapper[4793]: > Jan 30 15:23:07 crc kubenswrapper[4793]: I0130 15:23:07.834291 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 15:23:08 crc kubenswrapper[4793]: I0130 15:23:08.399411 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:08 crc kubenswrapper[4793]: E0130 15:23:08.399630 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:10 crc kubenswrapper[4793]: I0130 15:23:10.493894 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lm7l8_e88efb4a-1489-4847-adb4-230a8b5db6ef/cert-manager-webhook/0.log" Jan 30 15:23:12 crc kubenswrapper[4793]: I0130 15:23:12.833580 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 15:23:13 crc kubenswrapper[4793]: I0130 15:23:13.125573 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" 
podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:13 crc kubenswrapper[4793]: > Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.466892 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tzjhq_8fd78cec-1c0f-427e-8224-4021da0ede3c/cert-manager-cainjector/0.log" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.649264 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-26t5l_1b680507-f432-4019-b372-d9452d89aa97/cert-manager-controller/0.log" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.880917 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 30 15:23:16 crc kubenswrapper[4793]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 30 15:23:16 crc kubenswrapper[4793]: > Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.881020 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.881962 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 30 15:23:16 crc kubenswrapper[4793]: I0130 15:23:16.882099 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerName="ceilometer-central-agent" containerID="cri-o://67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f" gracePeriod=30 Jan 30 15:23:18 crc kubenswrapper[4793]: I0130 15:23:18.773789 4793 generic.go:334] "Generic (PLEG): container finished" podID="4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d" containerID="67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f" exitCode=0 Jan 30 15:23:18 crc kubenswrapper[4793]: I0130 15:23:18.773861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerDied","Data":"67cd78805cfd71182011eb15b3b8e8abf6d3edb3e63f79fbcc6bba28ee33409f"} Jan 30 15:23:19 crc kubenswrapper[4793]: I0130 15:23:19.662650 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:19 crc kubenswrapper[4793]: E0130 15:23:19.662964 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:20 crc kubenswrapper[4793]: I0130 15:23:20.235657 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:23:23 crc kubenswrapper[4793]: I0130 15:23:23.127307 4793 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:23 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:23 crc kubenswrapper[4793]: > Jan 30 15:23:23 crc kubenswrapper[4793]: I0130 15:23:23.819271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f9dd9b5-407b-47a1-91ee-5ee7a8b4816d","Type":"ContainerStarted","Data":"e0afffecc4a1d26ccd13cb484429754b46c22d6988a46071be25b6f7627edd50"} Jan 30 15:23:24 crc kubenswrapper[4793]: I0130 15:23:24.828525 4793 generic.go:334] "Generic (PLEG): container finished" podID="c35934a1-325a-4231-8dde-9357aab2af3f" containerID="b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9" exitCode=0 Jan 30 15:23:24 crc kubenswrapper[4793]: I0130 15:23:24.828579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9"} Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.198837 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kc5ft_5df01042-63fe-458a-b71d-d1f9bdf9ea66/nmstate-console-plugin/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.370901 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dh9db_e635e428-77d8-44fb-baa4-1af4bd603c10/nmstate-handler/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.441057 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/kube-rbac-proxy/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.469463 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2gwr6_1a7bdce5-b625-40ce-b674-a834fcd178a8/nmstate-metrics/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.810313 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9bsps_1f691ecb-c128-4332-a7ab-c4e173490f50/nmstate-operator/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.813461 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hw489_68bcadc4-02c3-44c0-a252-0606ff1f0a09/nmstate-webhook/0.log" Jan 30 15:23:25 crc kubenswrapper[4793]: I0130 15:23:25.840593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerStarted","Data":"64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd"} Jan 30 15:23:26 crc kubenswrapper[4793]: I0130 15:23:26.870250 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d86tm" podStartSLOduration=4.071586866 podStartE2EDuration="30.870228055s" podCreationTimestamp="2026-01-30 15:22:56 +0000 UTC" firstStartedPulling="2026-01-30 15:22:58.605664051 +0000 UTC m=+5989.307012542" lastFinishedPulling="2026-01-30 15:23:25.40430524 +0000 UTC m=+6016.105653731" observedRunningTime="2026-01-30 15:23:26.866552285 +0000 UTC m=+6017.567900776" watchObservedRunningTime="2026-01-30 
15:23:26.870228055 +0000 UTC m=+6017.571576546" Jan 30 15:23:26 crc kubenswrapper[4793]: I0130 15:23:26.955660 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:26 crc kubenswrapper[4793]: I0130 15:23:26.955703 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:28 crc kubenswrapper[4793]: I0130 15:23:28.013874 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d86tm" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:28 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:28 crc kubenswrapper[4793]: > Jan 30 15:23:33 crc kubenswrapper[4793]: I0130 15:23:33.139517 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:33 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:33 crc kubenswrapper[4793]: > Jan 30 15:23:34 crc kubenswrapper[4793]: I0130 15:23:34.398815 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:34 crc kubenswrapper[4793]: E0130 15:23:34.399182 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:37 crc kubenswrapper[4793]: I0130 15:23:37.005970 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:37 crc kubenswrapper[4793]: I0130 15:23:37.066456 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:38 crc kubenswrapper[4793]: I0130 15:23:38.170961 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:23:38 crc kubenswrapper[4793]: I0130 15:23:38.958723 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d86tm" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" containerID="cri-o://64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd" gracePeriod=2 Jan 30 15:23:39 crc kubenswrapper[4793]: I0130 15:23:39.972791 4793 generic.go:334] "Generic (PLEG): container finished" podID="c35934a1-325a-4231-8dde-9357aab2af3f" containerID="64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd" exitCode=0 Jan 30 15:23:39 crc kubenswrapper[4793]: I0130 15:23:39.973066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd"} Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.128371 4793 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.312720 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") pod \"c35934a1-325a-4231-8dde-9357aab2af3f\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.312880 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") pod \"c35934a1-325a-4231-8dde-9357aab2af3f\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.312956 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") pod \"c35934a1-325a-4231-8dde-9357aab2af3f\" (UID: \"c35934a1-325a-4231-8dde-9357aab2af3f\") " Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.313835 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities" (OuterVolumeSpecName: "utilities") pod "c35934a1-325a-4231-8dde-9357aab2af3f" (UID: "c35934a1-325a-4231-8dde-9357aab2af3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.317384 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc" (OuterVolumeSpecName: "kube-api-access-578xc") pod "c35934a1-325a-4231-8dde-9357aab2af3f" (UID: "c35934a1-325a-4231-8dde-9357aab2af3f"). InnerVolumeSpecName "kube-api-access-578xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.367301 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c35934a1-325a-4231-8dde-9357aab2af3f" (UID: "c35934a1-325a-4231-8dde-9357aab2af3f"). InnerVolumeSpecName "catalog-content". 
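
The pod_startup_latency_tracker record above for certified-operators-d86tm is internally consistent and shows what podStartSLOduration actually measures: end-to-end startup time minus the image-pull window, since pull time is excluded from the startup SLO.

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 15:23:26.870228055 - 15:22:56           = 30.870228055s
    pull window         = lastFinishedPulling - firstStartedPulling
                        = 15:23:25.404305240 - 15:22:58.605664051 = 26.798641189s
    podStartSLOduration = 30.870228055s - 26.798641189s           =  4.071586866s

The earlier record for redhat-operators-nlmdf checks out the same way: 16.525809867s - 13.346266392s = 3.179543475s.
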
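On the ceilometer-0 liveness failures above: "Expecting value: line 1 column 1 (char 0)" is the error Python's json decoder raises on an empty or non-JSON document, so the probe's healthcheck appears to have received no usable response body from the central agent before kubelet restarted it ("Unkown" is the probe output's own misspelling, preserved verbatim here). The two earlier "command timed out" results against the same container fit the same picture of an unresponsive agent.
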
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.415296 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578xc\" (UniqueName: \"kubernetes.io/projected/c35934a1-325a-4231-8dde-9357aab2af3f-kube-api-access-578xc\") on node \"crc\" DevicePath \"\"" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.415545 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.415613 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35934a1-325a-4231-8dde-9357aab2af3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.984300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d86tm" event={"ID":"c35934a1-325a-4231-8dde-9357aab2af3f","Type":"ContainerDied","Data":"67b39464bec1710449607f7c3521e7192c615bc0f3447d2003996ee508c4b158"} Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.984544 4793 scope.go:117] "RemoveContainer" containerID="64e3e8d3bc5b50d9a440eccb4f185891b26096515466621e198a14f5182466bd" Jan 30 15:23:40 crc kubenswrapper[4793]: I0130 15:23:40.984354 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d86tm" Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.009821 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.020622 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d86tm"] Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.024863 4793 scope.go:117] "RemoveContainer" containerID="b839195821be83a9e7374cf15a6233c62012a4b46d47003811c0c0bc8e77ddd9" Jan 30 15:23:41 crc kubenswrapper[4793]: I0130 15:23:41.063643 4793 scope.go:117] "RemoveContainer" containerID="612fafe439052cb8b36014e5e1fdcf820fd924ff9c4da2d5454871cca09f6085" Jan 30 15:23:42 crc kubenswrapper[4793]: I0130 15:23:42.413755 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" path="/var/lib/kubelet/pods/c35934a1-325a-4231-8dde-9357aab2af3f/volumes" Jan 30 15:23:43 crc kubenswrapper[4793]: I0130 15:23:43.135635 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:43 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:43 crc kubenswrapper[4793]: > Jan 30 15:23:46 crc kubenswrapper[4793]: I0130 15:23:46.398670 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:23:46 crc kubenswrapper[4793]: E0130 15:23:46.399145 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:23:53 crc kubenswrapper[4793]: I0130 15:23:53.131998 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:23:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:23:53 crc kubenswrapper[4793]: > Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.424748 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/kube-rbac-proxy/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.548748 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7nlfd_34253a93-968b-47e2-aa0d-43ddb72f29f5/controller/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.710315 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.863982 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.904407 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:57 crc kubenswrapper[4793]: I0130 15:23:57.958894 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.006918 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.247701 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.255975 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.293865 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.300360 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.470744 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-reloader/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.474916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.521531 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/cp-frr-files/0.log" Jan 30 15:23:58 crc 
kubenswrapper[4793]: I0130 15:23:58.580856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/controller/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.719916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.740185 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr-metrics/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.834549 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/kube-rbac-proxy-frr/0.log" Jan 30 15:23:58 crc kubenswrapper[4793]: I0130 15:23:58.960957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/reloader/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.179763 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4p6gx_e5a76649-d081-4224-baca-095ca1ffadfd/frr-k8s-webhook-server/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.437089 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fbd4d697c-ndglw_75266e51-59ee-432d-b56a-ba972e5ff25b/manager/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.564817 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6446fc49bd-rzbbm_45949f1b-1075-4d7f-9007-8525e0364a55/webhook-server/0.log" Jan 30 15:23:59 crc kubenswrapper[4793]: I0130 15:23:59.896251 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/kube-rbac-proxy/0.log" Jan 30 15:24:00 crc kubenswrapper[4793]: I0130 15:24:00.386277 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vsdkv_fd03c93b-a2a7-4a2f-9292-29c4e7fe9640/frr/0.log" Jan 30 15:24:00 crc kubenswrapper[4793]: I0130 15:24:00.788743 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9hvr_519ea47c-0d76-44cb-af34-823c71e508c9/speaker/0.log" Jan 30 15:24:01 crc kubenswrapper[4793]: I0130 15:24:01.398650 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:01 crc kubenswrapper[4793]: E0130 15:24:01.399003 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:03 crc kubenswrapper[4793]: I0130 15:24:03.210418 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:03 crc kubenswrapper[4793]: > Jan 30 15:24:04 crc kubenswrapper[4793]: 
I0130 15:24:04.929230 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-vsdkv" podUID="fd03c93b-a2a7-4a2f-9292-29c4e7fe9640" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 15:24:12 crc kubenswrapper[4793]: I0130 15:24:12.398681 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:12 crc kubenswrapper[4793]: E0130 15:24:12.399333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:13 crc kubenswrapper[4793]: I0130 15:24:13.137550 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:13 crc kubenswrapper[4793]: > Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.167273 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.440723 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.512003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.533898 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.803783 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/util/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.812304 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/pull/0.log" Jan 30 15:24:16 crc kubenswrapper[4793]: I0130 15:24:16.814425 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck2d29_7bd35260-c3c5-4f56-b2ba-d47ca60144d8/extract/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.181263 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: 
I0130 15:24:17.600029 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.607266 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.621657 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:24:17 crc kubenswrapper[4793]: I0130 15:24:17.929287 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/extract/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.145773 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/pull/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.207925 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h54l4_cd0e9042-d9db-4b5e-98b9-31ab2b3c4120/util/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.331745 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.549831 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.588145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.633506 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.763216 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-utilities/0.log" Jan 30 15:24:18 crc kubenswrapper[4793]: I0130 15:24:18.837133 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/extract-content/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.049991 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.527656 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.580471 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.645207 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.662458 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-67xsr_4a0cd3b8-afdf-4eb1-b818-565ce4d0647d/registry-server/0.log" Jan 30 15:24:19 crc kubenswrapper[4793]: I0130 15:24:19.845360 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-content/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.012033 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zkjbp_5834bf4b-676f-4ece-bcee-28949a7109ca/marketplace-operator/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.374134 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.601040 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/extract-utilities/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.680352 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:24:20 crc kubenswrapper[4793]: I0130 15:24:20.833095 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.032559 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.243178 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-utilities/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.326412 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/extract-content/0.log" Jan 30 15:24:21 crc kubenswrapper[4793]: I0130 15:24:21.707606 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rgznc_79353c7a-f5cf-43e5-9c5a-443565d0cca7/registry-server/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.002379 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lcb4v_adcaff8e-ed88-4fa1-af55-aedc60d35481/registry-server/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.005298 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.340771 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-content/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.340856 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-content/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.354939 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.574347 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-content/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.578717 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/registry-server/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.603470 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlmdf_ba5c5be7-e683-443f-a3b6-7b3507b68aa6/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.689681 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.924796 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:24:22 crc kubenswrapper[4793]: I0130 15:24:22.982413 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.013448 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.131897 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:23 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:23 crc kubenswrapper[4793]: > Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.131988 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.132721 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} pod="openshift-marketplace/redhat-operators-nlmdf" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.132767 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" 
containerID="cri-o://9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" gracePeriod=30 Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.147162 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-utilities/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.163084 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/extract-content/0.log" Jan 30 15:24:23 crc kubenswrapper[4793]: I0130 15:24:23.950873 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5rxw_6be7bc1b-60e4-429d-b706-90063b00442e/registry-server/0.log" Jan 30 15:24:24 crc kubenswrapper[4793]: I0130 15:24:24.399957 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:24 crc kubenswrapper[4793]: E0130 15:24:24.400923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:37 crc kubenswrapper[4793]: I0130 15:24:37.398880 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:37 crc kubenswrapper[4793]: E0130 15:24:37.399550 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rdsch_openshift-machine-config-operator(f59a12e8-194c-4874-a9ef-2fc58c18fbbe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" Jan 30 15:24:48 crc kubenswrapper[4793]: I0130 15:24:48.398872 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:24:48 crc kubenswrapper[4793]: I0130 15:24:48.763701 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e"} Jan 30 15:24:49 crc kubenswrapper[4793]: I0130 15:24:49.775699 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" exitCode=0 Jan 30 15:24:49 crc kubenswrapper[4793]: I0130 15:24:49.775784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} Jan 30 15:24:49 crc kubenswrapper[4793]: I0130 15:24:49.776044 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" 
event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerStarted","Data":"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0"} Jan 30 15:24:52 crc kubenswrapper[4793]: I0130 15:24:52.079889 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:24:52 crc kubenswrapper[4793]: I0130 15:24:52.080442 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:24:53 crc kubenswrapper[4793]: I0130 15:24:53.163262 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:24:53 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:24:53 crc kubenswrapper[4793]: > Jan 30 15:25:03 crc kubenswrapper[4793]: I0130 15:25:03.145170 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:25:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:25:03 crc kubenswrapper[4793]: > Jan 30 15:25:13 crc kubenswrapper[4793]: I0130 15:25:13.121102 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" probeResult="failure" output=< Jan 30 15:25:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Jan 30 15:25:13 crc kubenswrapper[4793]: > Jan 30 15:25:22 crc kubenswrapper[4793]: I0130 15:25:22.127137 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:22 crc kubenswrapper[4793]: I0130 15:25:22.189984 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:22 crc kubenswrapper[4793]: I0130 15:25:22.369308 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:25:24 crc kubenswrapper[4793]: I0130 15:25:24.095122 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlmdf" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" containerID="cri-o://b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" gracePeriod=2 Jan 30 15:25:24 crc kubenswrapper[4793]: I0130 15:25:24.933813 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104443 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" exitCode=0 Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104544 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0"} Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlmdf" event={"ID":"ba5c5be7-e683-443f-a3b6-7b3507b68aa6","Type":"ContainerDied","Data":"8372c971f9f6c2985247616cba22145cd94668d2cdaaebf62f2b83a40bacf8bb"} Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104884 4793 scope.go:117] "RemoveContainer" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.104560 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlmdf" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.127111 4793 scope.go:117] "RemoveContainer" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.127753 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") pod \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.127946 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") pod \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.128017 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") pod \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\" (UID: \"ba5c5be7-e683-443f-a3b6-7b3507b68aa6\") " Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.128871 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities" (OuterVolumeSpecName: "utilities") pod "ba5c5be7-e683-443f-a3b6-7b3507b68aa6" (UID: "ba5c5be7-e683-443f-a3b6-7b3507b68aa6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.134020 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb" (OuterVolumeSpecName: "kube-api-access-42qfb") pod "ba5c5be7-e683-443f-a3b6-7b3507b68aa6" (UID: "ba5c5be7-e683-443f-a3b6-7b3507b68aa6"). InnerVolumeSpecName "kube-api-access-42qfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.201720 4793 scope.go:117] "RemoveContainer" containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.223487 4793 scope.go:117] "RemoveContainer" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.230657 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42qfb\" (UniqueName: \"kubernetes.io/projected/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-kube-api-access-42qfb\") on node \"crc\" DevicePath \"\"" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.230698 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.268429 4793 scope.go:117] "RemoveContainer" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.270600 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0\": container with ID starting with b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0 not found: ID does not exist" containerID="b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.270635 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0"} err="failed to get container status \"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0\": rpc error: code = NotFound desc = could not find container \"b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0\": container with ID starting with b3d0749c7df98c795e61788945ced7ddba8210dd800b7a4b7f4520cb92b1d7d0 not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.270672 4793 scope.go:117] "RemoveContainer" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.270960 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd\": container with ID starting with 9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd not found: ID does not exist" containerID="9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271011 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd"} err="failed to get container status \"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd\": rpc error: code = NotFound desc = could not find container \"9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd\": container with ID starting with 9b0b6aa8e543470386e75729c139f54dec46314f2c5822217457aa4824ae42bd not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271032 4793 scope.go:117] "RemoveContainer" 
containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.271318 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f\": container with ID starting with 4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f not found: ID does not exist" containerID="4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271340 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f"} err="failed to get container status \"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f\": rpc error: code = NotFound desc = could not find container \"4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f\": container with ID starting with 4d66c69d3ecf0bca1222126c2db0717676923c0654a77459b14c785cef887e9f not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271354 4793 scope.go:117] "RemoveContainer" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" Jan 30 15:25:25 crc kubenswrapper[4793]: E0130 15:25:25.271616 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997\": container with ID starting with 96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997 not found: ID does not exist" containerID="96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.271700 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997"} err="failed to get container status \"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997\": rpc error: code = NotFound desc = could not find container \"96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997\": container with ID starting with 96eafdb6f9ddae51850a8ca55821681a5d4673b2d52f5f7801f5100df5756997 not found: ID does not exist" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.279367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba5c5be7-e683-443f-a3b6-7b3507b68aa6" (UID: "ba5c5be7-e683-443f-a3b6-7b3507b68aa6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.332423 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5c5be7-e683-443f-a3b6-7b3507b68aa6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.453861 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:25:25 crc kubenswrapper[4793]: I0130 15:25:25.465094 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nlmdf"] Jan 30 15:25:26 crc kubenswrapper[4793]: I0130 15:25:26.416908 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" path="/var/lib/kubelet/pods/ba5c5be7-e683-443f-a3b6-7b3507b68aa6/volumes" Jan 30 15:26:01 crc kubenswrapper[4793]: I0130 15:26:01.816799 4793 scope.go:117] "RemoveContainer" containerID="86e00e31965f1b3c0ea7cf7b438eeaa03e0e567fc25ab2389b6dc1be13ddc91b" Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.259723 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerID="bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5" exitCode=0 Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.259837 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5swb7/must-gather-9zdpz" event={"ID":"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72","Type":"ContainerDied","Data":"bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5"} Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.260923 4793 scope.go:117] "RemoveContainer" containerID="bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5" Jan 30 15:26:59 crc kubenswrapper[4793]: I0130 15:26:59.373901 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/gather/0.log" Jan 30 15:27:02 crc kubenswrapper[4793]: I0130 15:27:02.016413 4793 scope.go:117] "RemoveContainer" containerID="85e030152ec5fa9dd3b51151a0867969b87294517f632303c2c8686222780d3f" Jan 30 15:27:12 crc kubenswrapper[4793]: I0130 15:27:12.415001 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:27:12 crc kubenswrapper[4793]: I0130 15:27:12.415733 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:27:13 crc kubenswrapper[4793]: I0130 15:27:13.861334 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:27:13 crc kubenswrapper[4793]: I0130 15:27:13.862302 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5swb7/must-gather-9zdpz" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" containerID="cri-o://4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144" gracePeriod=2 Jan 30 15:27:13 crc 
kubenswrapper[4793]: I0130 15:27:13.871283 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5swb7/must-gather-9zdpz"] Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.431510 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/copy/0.log" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.436637 4793 generic.go:334] "Generic (PLEG): container finished" podID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerID="4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144" exitCode=143 Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.636973 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/copy/0.log" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.637831 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.745666 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") pod \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.745742 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") pod \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\" (UID: \"9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72\") " Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.762453 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv" (OuterVolumeSpecName: "kube-api-access-gm2xv") pod "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" (UID: "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72"). InnerVolumeSpecName "kube-api-access-gm2xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.847920 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm2xv\" (UniqueName: \"kubernetes.io/projected/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-kube-api-access-gm2xv\") on node \"crc\" DevicePath \"\"" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.931147 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" (UID: "9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:27:14 crc kubenswrapper[4793]: I0130 15:27:14.950336 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.446164 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5swb7_must-gather-9zdpz_9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/copy/0.log" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.446684 4793 scope.go:117] "RemoveContainer" containerID="4941afb1ffe31f3ef59ded56a75fac16d895a4e8c097ba8e151ea8b4f01a6144" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.446778 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5swb7/must-gather-9zdpz" Jan 30 15:27:15 crc kubenswrapper[4793]: I0130 15:27:15.466457 4793 scope.go:117] "RemoveContainer" containerID="bcc4bc21a6c12cae1a4c2db58d26bdd2be9a4e12bd23b3f347d467b22b7270a5" Jan 30 15:27:16 crc kubenswrapper[4793]: I0130 15:27:16.411830 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" path="/var/lib/kubelet/pods/9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72/volumes" Jan 30 15:27:42 crc kubenswrapper[4793]: I0130 15:27:42.413434 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:27:42 crc kubenswrapper[4793]: I0130 15:27:42.415306 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.413317 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.413946 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.414006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 15:28:12.414905 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e"} pod="openshift-machine-config-operator/machine-config-daemon-rdsch" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:28:12 crc kubenswrapper[4793]: I0130 
15:28:12.414980 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" containerID="cri-o://7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e" gracePeriod=600 Jan 30 15:28:13 crc kubenswrapper[4793]: I0130 15:28:13.022833 4793 generic.go:334] "Generic (PLEG): container finished" podID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerID="7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e" exitCode=0 Jan 30 15:28:13 crc kubenswrapper[4793]: I0130 15:28:13.022953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerDied","Data":"7166f9d0cce33b612a836c2dfa046b2203b8a1eca0d3b045f83e75288acbdb6e"} Jan 30 15:28:13 crc kubenswrapper[4793]: I0130 15:28:13.023501 4793 scope.go:117] "RemoveContainer" containerID="17da570e10708d55791cd7d48d90aab97998518dd7fe3d586f254af632decbce" Jan 30 15:28:14 crc kubenswrapper[4793]: I0130 15:28:14.036180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" event={"ID":"f59a12e8-194c-4874-a9ef-2fc58c18fbbe","Type":"ContainerStarted","Data":"a1b734ff73ea9573c19a8fab41ab955c2ee3f3e6aa5ff281c71092fb8c35b49b"} Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.245273 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.247342 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.247428 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.247499 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.247847 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.247924 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.247985 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248072 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248128 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248192 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248248 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-utilities" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248311 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248376 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248437 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248493 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248555 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248611 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="extract-content" Jan 30 15:28:44 crc kubenswrapper[4793]: E0130 15:28:44.248681 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="gather" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.248819 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="gather" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249068 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249150 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="copy" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249219 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5c5be7-e683-443f-a3b6-7b3507b68aa6" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249292 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7a54e3-1dc8-4f06-a7f7-4f83e1ae5a72" containerName="gather" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.249365 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35934a1-325a-4231-8dde-9357aab2af3f" containerName="registry-server" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.250720 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.261216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.389343 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.389424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.389549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.494494 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.494609 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.494736 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.495340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.495888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.529122 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"community-operators-jk5h7\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:44 crc kubenswrapper[4793]: I0130 15:28:44.574289 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:45 crc kubenswrapper[4793]: I0130 15:28:45.231806 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.141641 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f41cf99-6474-4b53-b297-0290b4566657" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" exitCode=0 Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.141734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a"} Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.143178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerStarted","Data":"76acece91eace693a7db849b9c561c197137451e4bc3f1f7ff8fcea4e1b97c9c"} Jan 30 15:28:46 crc kubenswrapper[4793]: I0130 15:28:46.143840 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 15:28:48 crc kubenswrapper[4793]: I0130 15:28:48.161930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerStarted","Data":"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f"} Jan 30 15:28:49 crc kubenswrapper[4793]: I0130 15:28:49.174759 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f41cf99-6474-4b53-b297-0290b4566657" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" exitCode=0 Jan 30 15:28:49 crc kubenswrapper[4793]: I0130 15:28:49.174809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f"} Jan 30 15:28:51 crc kubenswrapper[4793]: I0130 15:28:51.196807 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerStarted","Data":"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751"} Jan 30 15:28:51 crc kubenswrapper[4793]: I0130 15:28:51.230732 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jk5h7" podStartSLOduration=3.07804903 podStartE2EDuration="7.230711971s" podCreationTimestamp="2026-01-30 15:28:44 +0000 UTC" firstStartedPulling="2026-01-30 15:28:46.143580015 +0000 UTC m=+6336.844928516" lastFinishedPulling="2026-01-30 15:28:50.296242966 +0000 UTC m=+6340.997591457" observedRunningTime="2026-01-30 15:28:51.220810048 +0000 UTC m=+6341.922158559" watchObservedRunningTime="2026-01-30 
15:28:51.230711971 +0000 UTC m=+6341.932060462" Jan 30 15:28:54 crc kubenswrapper[4793]: I0130 15:28:54.575472 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:54 crc kubenswrapper[4793]: I0130 15:28:54.575946 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:54 crc kubenswrapper[4793]: I0130 15:28:54.628644 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:55 crc kubenswrapper[4793]: I0130 15:28:55.280584 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:55 crc kubenswrapper[4793]: I0130 15:28:55.335125 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.250235 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jk5h7" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" containerID="cri-o://4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" gracePeriod=2 Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.698012 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.864154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") pod \"1f41cf99-6474-4b53-b297-0290b4566657\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.864222 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") pod \"1f41cf99-6474-4b53-b297-0290b4566657\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.864454 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") pod \"1f41cf99-6474-4b53-b297-0290b4566657\" (UID: \"1f41cf99-6474-4b53-b297-0290b4566657\") " Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.866450 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities" (OuterVolumeSpecName: "utilities") pod "1f41cf99-6474-4b53-b297-0290b4566657" (UID: "1f41cf99-6474-4b53-b297-0290b4566657"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.878234 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj" (OuterVolumeSpecName: "kube-api-access-h6mjj") pod "1f41cf99-6474-4b53-b297-0290b4566657" (UID: "1f41cf99-6474-4b53-b297-0290b4566657"). InnerVolumeSpecName "kube-api-access-h6mjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.939588 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f41cf99-6474-4b53-b297-0290b4566657" (UID: "1f41cf99-6474-4b53-b297-0290b4566657"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.967374 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.967657 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f41cf99-6474-4b53-b297-0290b4566657-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:28:57 crc kubenswrapper[4793]: I0130 15:28:57.967759 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6mjj\" (UniqueName: \"kubernetes.io/projected/1f41cf99-6474-4b53-b297-0290b4566657-kube-api-access-h6mjj\") on node \"crc\" DevicePath \"\"" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.260869 4793 generic.go:334] "Generic (PLEG): container finished" podID="1f41cf99-6474-4b53-b297-0290b4566657" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" exitCode=0 Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.260925 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jk5h7" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.260947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751"} Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.262192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jk5h7" event={"ID":"1f41cf99-6474-4b53-b297-0290b4566657","Type":"ContainerDied","Data":"76acece91eace693a7db849b9c561c197137451e4bc3f1f7ff8fcea4e1b97c9c"} Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.262225 4793 scope.go:117] "RemoveContainer" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.283416 4793 scope.go:117] "RemoveContainer" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.312879 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.325957 4793 scope.go:117] "RemoveContainer" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.326594 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jk5h7"] Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.372547 4793 scope.go:117] "RemoveContainer" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" Jan 30 15:28:58 crc kubenswrapper[4793]: E0130 15:28:58.373505 4793 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751\": container with ID starting with 4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751 not found: ID does not exist" containerID="4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373547 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751"} err="failed to get container status \"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751\": rpc error: code = NotFound desc = could not find container \"4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751\": container with ID starting with 4c4b7bf86bbc8267c215e4099a6e16eedca6c73a85e6c8c59caa07e49abaf751 not found: ID does not exist" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373575 4793 scope.go:117] "RemoveContainer" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" Jan 30 15:28:58 crc kubenswrapper[4793]: E0130 15:28:58.373866 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f\": container with ID starting with 0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f not found: ID does not exist" containerID="0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373891 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f"} err="failed to get container status \"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f\": rpc error: code = NotFound desc = could not find container \"0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f\": container with ID starting with 0e36962388067b51a3f1395371a3ca92f5c6ee31e8662044b2d2a7449dd22e1f not found: ID does not exist" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.373909 4793 scope.go:117] "RemoveContainer" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" Jan 30 15:28:58 crc kubenswrapper[4793]: E0130 15:28:58.374250 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a\": container with ID starting with 5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a not found: ID does not exist" containerID="5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.374282 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a"} err="failed to get container status \"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a\": rpc error: code = NotFound desc = could not find container \"5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a\": container with ID starting with 5429fbe5e52753e17055e722a2c25fba28ff3179ee19d3cff77353d83a754e0a not found: ID does not exist" Jan 30 15:28:58 crc kubenswrapper[4793]: I0130 15:28:58.413618 4793 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="1f41cf99-6474-4b53-b297-0290b4566657" path="/var/lib/kubelet/pods/1f41cf99-6474-4b53-b297-0290b4566657/volumes" Jan 30 15:29:46 crc kubenswrapper[4793]: E0130 15:29:46.819403 4793 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.421s" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.150489 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm"] Jan 30 15:30:00 crc kubenswrapper[4793]: E0130 15:30:00.151671 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-content" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151692 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-content" Jan 30 15:30:00 crc kubenswrapper[4793]: E0130 15:30:00.151714 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-utilities" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151725 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="extract-utilities" Jan 30 15:30:00 crc kubenswrapper[4793]: E0130 15:30:00.151754 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151763 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.151971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f41cf99-6474-4b53-b297-0290b4566657" containerName="registry-server" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.152896 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.157653 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.160143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.160498 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm"] Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.292736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.293135 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.293230 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.394989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.395110 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.395202 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.396190 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod 
\"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.411934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.486714 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"collect-profiles-29496450-8sddm\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:00 crc kubenswrapper[4793]: I0130 15:30:00.775120 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:01 crc kubenswrapper[4793]: I0130 15:30:01.261521 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm"] Jan 30 15:30:02 crc kubenswrapper[4793]: I0130 15:30:02.086101 4793 generic.go:334] "Generic (PLEG): container finished" podID="a203b054-652c-4239-b471-4e7ef7665932" containerID="c337e6f14c81285f1bf99ab9b3d3d155367ee3babd91077c625549c27d6b85fe" exitCode=0 Jan 30 15:30:02 crc kubenswrapper[4793]: I0130 15:30:02.086186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" event={"ID":"a203b054-652c-4239-b471-4e7ef7665932","Type":"ContainerDied","Data":"c337e6f14c81285f1bf99ab9b3d3d155367ee3babd91077c625549c27d6b85fe"} Jan 30 15:30:02 crc kubenswrapper[4793]: I0130 15:30:02.086449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" event={"ID":"a203b054-652c-4239-b471-4e7ef7665932","Type":"ContainerStarted","Data":"f924ef12280d267664eb1609c2390e7c8fa089afcfea0c00e80a81a0aa9e10e5"} Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.439807 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.564188 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") pod \"a203b054-652c-4239-b471-4e7ef7665932\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.564433 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") pod \"a203b054-652c-4239-b471-4e7ef7665932\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.564471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") pod \"a203b054-652c-4239-b471-4e7ef7665932\" (UID: \"a203b054-652c-4239-b471-4e7ef7665932\") " Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.566597 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume" (OuterVolumeSpecName: "config-volume") pod "a203b054-652c-4239-b471-4e7ef7665932" (UID: "a203b054-652c-4239-b471-4e7ef7665932"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.570614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp" (OuterVolumeSpecName: "kube-api-access-lgnlp") pod "a203b054-652c-4239-b471-4e7ef7665932" (UID: "a203b054-652c-4239-b471-4e7ef7665932"). InnerVolumeSpecName "kube-api-access-lgnlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.572291 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a203b054-652c-4239-b471-4e7ef7665932" (UID: "a203b054-652c-4239-b471-4e7ef7665932"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.667446 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a203b054-652c-4239-b471-4e7ef7665932-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.667507 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203b054-652c-4239-b471-4e7ef7665932-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:30:03 crc kubenswrapper[4793]: I0130 15:30:03.667532 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgnlp\" (UniqueName: \"kubernetes.io/projected/a203b054-652c-4239-b471-4e7ef7665932-kube-api-access-lgnlp\") on node \"crc\" DevicePath \"\"" Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.103356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" event={"ID":"a203b054-652c-4239-b471-4e7ef7665932","Type":"ContainerDied","Data":"f924ef12280d267664eb1609c2390e7c8fa089afcfea0c00e80a81a0aa9e10e5"} Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.103653 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f924ef12280d267664eb1609c2390e7c8fa089afcfea0c00e80a81a0aa9e10e5" Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.103441 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496450-8sddm" Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.525506 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 15:30:04 crc kubenswrapper[4793]: I0130 15:30:04.533525 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-ttc5r"] Jan 30 15:30:06 crc kubenswrapper[4793]: I0130 15:30:06.414094 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c63ff2c-cb24-48c2-9af7-05d299d8b36a" path="/var/lib/kubelet/pods/1c63ff2c-cb24-48c2-9af7-05d299d8b36a/volumes" Jan 30 15:30:42 crc kubenswrapper[4793]: I0130 15:30:42.414130 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:30:42 crc kubenswrapper[4793]: I0130 15:30:42.414773 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:31:02 crc kubenswrapper[4793]: I0130 15:31:02.175732 4793 scope.go:117] "RemoveContainer" containerID="2bb7033c2b6902fe7f3fb960e4da2010748828c26715bef2cd982381fe406b45" Jan 30 15:31:12 crc kubenswrapper[4793]: I0130 15:31:12.414020 4793 patch_prober.go:28] interesting pod/machine-config-daemon-rdsch container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 30 15:31:12 crc kubenswrapper[4793]: I0130 15:31:12.414634 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rdsch" podUID="f59a12e8-194c-4874-a9ef-2fc58c18fbbe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
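
[Editor's note] To pull just these probe failures out of a capture like this one, a small filter over stdin is enough; the "Probe failed" marker comes straight from the prober.go lines above, while the file name and journalctl invocation in the comment are only example usage:

    // Filter a saved journal capture for kubelet probe failures, e.g.:
    //   journalctl -u kubelet | go run filter.go
    // The `"Probe failed"` substring is the prober.go marker seen above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		if strings.Contains(sc.Text(), `"Probe failed"`) {
    			fmt.Println(sc.Text())
    		}
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "read error:", err)
    	}
    }
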